The theme for this year’s International Women’s Day, “Count Her In: Invest in Women. Accelerate Progress,” establishes a poignant tone for fostering authentic change. It perfectly mirrors the dynamic landscape of today’s data-driven environment, where change is the only constant. The last third-party cookie has finally…
Researchers enhance peripheral vision in AI models
Peripheral vision enables humans to see shapes that aren’t directly in our line of sight, albeit with less detail. This ability expands our field of vision and can be helpful in many situations, such as detecting a vehicle approaching our car from the side.
Unlike humans, AI does not have peripheral vision. Equipping computer vision models with this ability could help them detect approaching hazards more effectively or predict whether a human driver would notice an oncoming object.
Taking a step in this direction, MIT researchers developed an image dataset that allows them to simulate peripheral vision in machine learning models. They found that training models with this dataset improved the models’ ability to detect objects in the visual periphery, although the models still performed worse than humans.
Their results also revealed that, unlike with humans, neither the size of objects nor the amount of visual clutter in a scene had a strong impact on the AI’s performance.
“There is something fundamental going on here. We tested so many different models, and even when we train them, they get a little bit better but they are not quite like humans. So, the question is: What is missing in these models?” says Vasha DuTell, a postdoc and co-author of a paper detailing this study.
Answering that question may help researchers build machine learning models that can see the world more like humans do. In addition to improving driver safety, such models could be used to develop displays that are easier for people to view.
Plus, a deeper understanding of peripheral vision in AI models could help researchers better predict human behavior, adds lead author Anne Harrington MEng ’23.
“Modeling peripheral vision, if we can really capture the essence of what is represented in the periphery, can help us understand the features in a visual scene that make our eyes move to collect more information,” she explains.
Their co-authors include Mark Hamilton, an electrical engineering and computer science graduate student; Ayush Tewari, a postdoc; Simon Stent, research manager at the Toyota Research Institute; and senior authors William T. Freeman, the Thomas and Gerd Perkins Professor of Electrical Engineering and Computer Science and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); and Ruth Rosenholtz, principal research scientist in the Department of Brain and Cognitive Sciences and a member of CSAIL. The research will be presented at the International Conference on Learning Representations.
“Any time you have a human interacting with a machine — a car, a robot, a user interface — it is hugely important to understand what the person can see. Peripheral vision plays a critical role in that understanding,” Rosenholtz says.
Simulating peripheral vision
Extend your arm in front of you and put your thumb up — the small area around your thumbnail is seen by your fovea, the small depression in the middle of your retina that provides the sharpest vision. Everything else you can see is in your visual periphery. Your visual cortex represents a scene with less detail and reliability as it moves farther from that sharp point of focus.
Many existing approaches to model peripheral vision in AI represent this deteriorating detail by blurring the edges of images, but the information loss that occurs in the optic nerve and visual cortex is far more complex.
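As a rough illustration of that naive baseline (not the researchers’ texture tiling approach), the sketch below blends progressively blurrier copies of an image as distance from an assumed central fixation point grows; the ring count, blur radii, and use of PIL/NumPy are illustrative assumptions.

```python
# Minimal sketch of the naive "blur with eccentricity" baseline described above.
# This is NOT the texture tiling model; it only illustrates detail falling off
# with distance from a fixation point. All parameters are illustrative.
import numpy as np
from PIL import Image, ImageFilter

def foveated_blur(img: Image.Image, fixation=None, n_rings=6, max_radius=8.0) -> Image.Image:
    """Blend progressively blurrier copies of `img` as eccentricity grows."""
    img = img.convert("RGB")
    w, h = img.size
    fx, fy = fixation if fixation is not None else (w / 2, h / 2)

    # Distance of every pixel from the fixation point, normalized to [0, 1].
    ys, xs = np.mgrid[0:h, 0:w]
    ecc = np.hypot(xs - fx, ys - fy)
    ecc /= ecc.max()

    out = np.array(img, dtype=np.float32)
    for i in range(1, n_rings + 1):
        # Each successive ring gets a stronger Gaussian blur than the last.
        radius = max_radius * i / n_rings
        blurred = np.array(img.filter(ImageFilter.GaussianBlur(radius)), dtype=np.float32)
        lo = (i - 1) / n_rings
        hi = i / n_rings + (1e-6 if i == n_rings else 0.0)  # include the outermost pixels
        mask = ((ecc >= lo) & (ecc < hi))[..., None]
        out = np.where(mask, blurred, out)
    return Image.fromarray(out.astype(np.uint8))

# Example: foveated_blur(Image.open("scene.jpg")).save("scene_peripheral.jpg")
```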
For a more accurate approach, the MIT researchers started with a technique used to model peripheral vision in humans. Known as the texture tiling model, this method transforms images to represent a human’s visual information loss.
They modified this model so it could transform images similarly, but in a more flexible way that doesn’t require knowing in advance where the person or AI will point their eyes.
“That let us faithfully model peripheral vision the same way it is being done in human vision research,” says Harrington.
The researchers used this modified technique to generate a huge dataset of transformed images that appear more textural in certain areas, to represent the loss of detail that occurs when a human looks further into the periphery.
Then they used the dataset to train several computer vision models and compared their performance with that of humans on an object detection task.
“We had to be very clever in how we set up the experiment so we could also test it in the machine learning models. We didn’t want to have to retrain the models on a toy task that they weren’t meant to be doing,” Harrington says.
Peculiar performance
Humans and models were shown pairs of transformed images that were identical except that one image had a target object located in the periphery. Then, each participant was asked to pick the image containing the target object.
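As a rough sketch of how such a two-alternative forced-choice trial could be run against a model, the snippet below scores a target class on both images of a pair and picks the higher-scoring one; the pretrained ResNet backbone and the class-logit readout are assumptions for illustration, not the study’s exact protocol.

```python
# Sketch of a two-alternative forced-choice (2AFC) trial for a vision model:
# show two images that differ only by a peripheral target and ask the model
# which one contains the target. The pretrained classifier and class-logit
# readout are illustrative assumptions, not the study's actual setup.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()

def target_score(image_path: str, target_class: int) -> float:
    """Logit the model assigns to `target_class` for one image."""
    img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logits = model(img)
    return logits[0, target_class].item()

def two_afc_choice(img_a: str, img_b: str, target_class: int) -> str:
    """Pick the image the model scores higher for the target class."""
    return img_a if target_score(img_a, target_class) >= target_score(img_b, target_class) else img_b

# Example trial (file names are placeholders):
# choice = two_afc_choice("pair_with_target.png", "pair_without_target.png", target_class=817)
```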
“One thing that really surprised us was how good people were at detecting objects in their periphery. We went through at least 10 different sets of images that were just too easy. We kept needing to use smaller and smaller objects,” Harrington adds.
The researchers found that training models from scratch with their dataset led to the greatest performance boosts, improving their ability to detect and recognize objects. Fine-tuning a model with their dataset, a process that involves tweaking a pretrained model so it can perform a new task, resulted in smaller performance gains.
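The fine-tuning route might look roughly like the sketch below; the dataset layout (one folder per class of transformed images), the ResNet backbone, and the hyperparameters are assumptions, not the authors’ configuration.

```python
# Rough sketch of fine-tuning a pretrained backbone on a folder of transformed
# images (one subfolder per class). Backbone, paths, and hyperparameters are
# illustrative assumptions, not the configuration used in the study.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("peripheral_dataset/train", transform=transform)  # hypothetical path
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Fine-tuning starts from pretrained weights; training "from scratch" would
# instead pass weights=None here.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))  # new task head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```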
But in every case, the machines weren’t as good as humans, and they were especially bad at detecting objects in the far periphery. Their performance also didn’t follow the same patterns as humans.
“That might suggest that the models aren’t using context in the same way as humans are to do these detection tasks. The strategy of the models might be different,” Harrington says.
The researchers plan to continue exploring these differences, with a goal of finding a model that can predict human performance in the visual periphery. This could enable AI systems that alert drivers to hazards they might not see, for instance. They also hope to inspire other researchers to conduct additional computer vision studies with their publicly available dataset.
“This work is important because it contributes to our understanding that human vision in the periphery should not be considered just impoverished vision due to limits in the number of photoreceptors we have, but rather, a representation that is optimized for us to perform tasks of real-world consequence,” says Justin Gardner, an associate professor in the Department of Psychology at Stanford University who was not involved with this work. “Moreover, the work shows that neural network models, despite their advancement in recent years, are unable to match human performance in this regard, which should lead to more AI research to learn from the neuroscience of human vision. This future research will be aided significantly by the database of images provided by the authors to mimic peripheral human vision.”
This work is supported, in part, by the Toyota Research Institute and the MIT CSAIL METEOR Fellowship.
Unpacking the Elon Musk vs. OpenAI Lawsuit
In the rapidly evolving landscape of artificial intelligence, a legal drama has unfolded that captures the intersection of visionary ideals and corporate realities. Elon Musk, a figure synonymous with groundbreaking advancements in technology, has initiated a lawsuit against OpenAI, the AI research organization he co-founded. The…
A Little Less Conversation, A Little More Action: How to Accelerate Generative AI Deployment in the Next 6 Months
Enough daydreaming, enough speculation, enough hype – this is a year of action. According to the McKinsey Global Institute, nearly 50% of typical business activities can now be automated by generative AI (GenAI), a type of artificial intelligence that can produce text, images, video, and synthetic…
Navigating the AI Security Landscape: A Deep Dive into the HiddenLayer Threat Report
In the rapidly advancing domain of artificial intelligence (AI), the HiddenLayer Threat Report, produced by HiddenLayer —a leading provider of security for AI—illuminates the complex and often perilous intersection of AI and cybersecurity. As AI technologies carve new paths for innovation, they simultaneously open the door…
Google engineer stole AI tech for Chinese firms
A former Google engineer has been charged with stealing trade secrets related to the company’s AI technology and secretly working with two Chinese firms. Linwei Ding, a 38-year-old Chinese national, was arrested on Wednesday in Newark, California, and faces four counts of federal trade secret theft,…
Pace of innovation in AI is fierce – but is ethics able to keep up? – AI News
If a week is traditionally a long time in politics, it is a yawning chasm when it comes to AI. The pace of innovation from the leading providers is one thing; the ferocity of innovation as competition hots up is quite another. But are the ethical…
Navigating and managing your organization’s AI risks – CyberTalk
By Hendrik De Bruin, Security Engineer, Check Point Software Technologies.
As you know, 2023 was the year when AI took off. Organizations quickly adopted AI-based products to stay competitive, increase productivity, and improve profitability.
However, much of this rapid adoption, which often occurred unofficially, has left organizations to contend with serious cyber security vulnerabilities – and CISOs are exposed.
Secret and confidential information leakage
There have been instances in the past where engineers and developers have uploaded proprietary source code to ChatGPT for purposes of evaluating and improving on the code.
This seemingly minor oversight could prove extremely costly if a competitor, or anyone with malicious intent, were to illicitly obtain access to that code through ChatGPT’s underlying systems.
How can CISOs protect organizations from AI-related risks?
The CISO role is ever-evolving, and it appears that artificial intelligence will become another area of responsibility for CISOs globally.
Whether you are the CISO for a Fortune 500 company or a small business, chances are that the organization you represent has already integrated AI into a number of its day-to-day activities.
If not adopted in a controlled and responsible manner, AI does pose a significant potential risk to organizations. The following recommendations may enable CISOs to better manage AI-based risks:
Evaluate the current situation
Before any risk can be mitigated, it is critical to first have a thorough understanding of it. You need to understand how likely each risk is to materialize so that appropriate controls can be put in place.
In order to better understand the risk posed by artificial intelligence to your information security, the following questions should be answered:
- What AI systems are currently in use? There may be official use cases as well as unofficial (shadow IT) cases where the organization is making use of artificial intelligence.
- How are these AI systems being used? For what purposes are these systems being used, and does the mere usage of these systems pose a risk to the organization or its reputation?
- What information is used in conjunction with AI systems? Considering the risks involved in managing and processing personally identifiable information (PII), it is critical to know how that information is being used and the implications thereof. The same pertains to confidential and classified information.
Answering the above questions should allow you to identify the most obvious risks posed to the organization in terms of regulatory and compliance risks, data privacy risks, data leakage risks, and adversarial machine learning risks.
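As one way to operationalize these questions, a security team could keep a lightweight inventory of AI systems and the data they touch; the sketch below is a minimal example, and its field names and risk categories are assumptions rather than a prescribed standard.

```python
# Minimal sketch of an AI-system inventory built while answering the questions
# above. Field names, example entries, and risk labels are illustrative.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    owner: str                    # accountable team or person
    sanctioned: bool              # official deployment vs. shadow IT
    purpose: str                  # what the system is used for
    data_categories: list[str] = field(default_factory=list)  # e.g. "PII", "confidential"
    risks: list[str] = field(default_factory=list)            # e.g. "data leakage", "compliance"

inventory = [
    AISystemRecord(
        name="ChatGPT (web)",
        owner="Engineering",
        sanctioned=False,
        purpose="Code review assistance",
        data_categories=["proprietary source code"],
        risks=["data leakage", "IP exposure"],
    ),
]

# Flag every unsanctioned system that touches sensitive data for follow-up.
for record in inventory:
    if not record.sanctioned and record.data_categories:
        print(f"Review required: {record.name} ({record.purpose})")
```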
Define and implement administrative controls
Once a thorough understanding of the organization’s current artificial intelligence landscape has been obtained and risks have been identified, the next step is to produce policies and procedures that adequately protect the organization against the risks identified during the evaluation stage.
These policies should deal with all aspects of AI usage within the organization. They should also go hand-in-hand with awareness training, ensuring that employees internalize the policies.
Once implemented, adherence to these policies should also be monitored.
Define and implement technical controls
After policies and procedures have been developed and applied, technical controls must be deployed as a means of policy and procedure enforcement.
Arguably, “Defense-in-Depth,” enforced by solutions that themselves leverage artificial intelligence and machine learning, is your best bet against the unknown and increasingly sophisticated threats facing organizations today.
The human element
In the age of artificial intelligence, the human element may be the most critical “ingredient” in mitigating risks and keeping the organization safe.
Critical thinking is a human superpower, so to speak, and it should be employed to differentiate fact from fiction.
The human-in-the-loop (HITL) approach should be considered. This approach allows AI to make tactical decisions, and perhaps even some strategic ones, while humans retain managerial decision-making power over the processes and activities related to these systems. This ensures that humans are always in the loop and available to apply critical thinking, good judgement, and oversight.
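One way to make the HITL principle concrete is a simple approval gate in front of any AI-initiated action: low-confidence or high-impact decisions are escalated to a human instead of executing automatically. The thresholds and action categories in the sketch below are illustrative assumptions.

```python
# Illustrative human-in-the-loop (HITL) gate: the AI proposes an action with a
# confidence score, and anything high-impact or low-confidence is escalated to
# a human reviewer rather than executed automatically. The threshold and the
# set of "high impact" actions are assumptions made for this sketch.
from dataclasses import dataclass

HIGH_IMPACT_ACTIONS = {"block_account", "delete_data", "quarantine_host"}
CONFIDENCE_THRESHOLD = 0.9

@dataclass
class ProposedAction:
    action: str
    target: str
    confidence: float

def requires_human_approval(proposal: ProposedAction) -> bool:
    """Escalate when the action is high impact or the model is unsure."""
    return proposal.action in HIGH_IMPACT_ACTIONS or proposal.confidence < CONFIDENCE_THRESHOLD

def handle(proposal: ProposedAction) -> str:
    if requires_human_approval(proposal):
        return f"ESCALATED to analyst: {proposal.action} on {proposal.target}"
    return f"AUTO-EXECUTED: {proposal.action} on {proposal.target}"

print(handle(ProposedAction("rate_limit_ip", "203.0.113.7", 0.97)))  # auto-executed
print(handle(ProposedAction("block_account", "jdoe", 0.99)))         # escalated (high impact)
```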
What does the future hold for AI and cyber security?
During 2024 and over the next few years, I’m certain that adoption of AI will continue to grow on the part of threat actors and defenders alike.
AI-based defenses are “…engines that learn and improve themselves against the kind of attacks we don’t yet know will happen,” says Check Point’s CTO, Dr. Dorit Dor.
It is clear that artificial intelligence is here to stay. Adoption is growing at a phenomenal rate among attackers and defenders alike; however, it is end users and their adoption of AI and generative AI that may pose the biggest risk to organizations and their secret or confidential information.
10 Best Work Management Software & Tools (March 2024)
In today’s fast-paced business environment, efficiency and organization are more crucial than ever. With teams often scattered across various locations and projects growing increasingly complex, the need for effective work management tools has never been greater. These tools not only streamline project management but also enhance…