Leveraging the power of AI and cloud computing – CyberTalk
EXECUTIVE SUMMARY:
As cyber adversaries diversify their tactics and devise increasingly sophisticated attack methods, legacy cyber security tools may no longer be capable of blocking threats. As every cyber security professional knows, novel and sophisticated threats are among the most difficult to prevent or defend against.
That said, the convergence of artificial intelligence (AI) and cloud computing might just be a game-changer. Staying ahead of sophisticated threats requires new thinking, new strategies and sometimes, new tools.
In this article, explore the benefits of AI and cloud. See how these transformative technologies can strengthen cyber security, despite presenting a few small challenges. Plus, become acquainted with a comprehensive AI-powered, cloud-delivered security platform.
Artificial intelligence advantages
Previous generations of cyber security tools have relied on predefined rules to identify threats. In contrast, AI can learn and adapt. It recognizes anomalies and suspicious patterns that could easily escape human analysts. This allows for fast and more accurate threat detection, enabling security teams to respond to incidents quickly — at a pace that could actually prevent the threat from spreading or engendering further damage.
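As an illustrative sketch of this rules-versus-learning contrast, the toy function below flags hosts whose failed-login counts are statistical outliers relative to the rest of the fleet, instead of applying a fixed cutoff. The hostnames, counts, and median-based scoring are all invented for illustration; production systems use far richer learned models, but the adaptive idea is the same:

```python
import statistics

def detect_anomalies(login_counts, threshold=5.0):
    """Flag hosts whose failed-login counts deviate sharply from the fleet.

    Instead of a fixed rule ("block after 100 failures"), the cutoff adapts
    to the observed data via a robust median/MAD score.
    """
    values = sorted(login_counts.values())
    med = statistics.median(values)
    # Median absolute deviation; fall back to 1.0 if all counts are equal.
    mad = statistics.median(abs(v - med) for v in values) or 1.0
    return {host: count for host, count in login_counts.items()
            if abs(count - med) / mad > threshold}

# Mostly quiet hosts, plus one outlier a static rule tuned to
# yesterday's traffic might miss.
counts = {"web-01": 4, "web-02": 6, "db-01": 5, "jump-01": 480}
print(detect_anomalies(counts))  # → {'jump-01': 480}
```

Because the baseline is recomputed from the data itself, the same function keeps working as normal traffic levels drift, which is the property the fixed rules of earlier tool generations lack.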
The cloud computing component
Cloud computing offers a secure and scalable platform for deploying AI-powered security solutions. On-premise infrastructure often lacks the processing power and storage capacity needed to train and run complex AI models.
The cloud, on the other hand, provides access to virtually limitless resources, allowing cyber security teams to scale security systems up or down, as needed. In addition, cloud-based AI solutions are readily available and can be deployed very efficiently, reducing the time it takes to implement strong cyber security measures.
AI and cloud computing
Within the cyber security domain, the synergy between AI and cloud computing creates a powerful force-multiplier for productivity and positive outcomes. Here’s a closer look at how AI and cloud work together:
- Data collection and aggregation: Cloud platforms enable the collection and storage of vast amounts of security data from various sources, including network traffic logs, endpoints, and users.
- AI-powered analysis: AI algorithms analyze the aforementioned data to identify threats, predict security incidents and uncover hidden patterns in attacker behavior.
- Automated response: Security teams can leverage AI for automated responses. These include isolating compromised systems, blocking suspicious traffic, and triggering remediation efforts.
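The three steps above can be sketched as a minimal pipeline. Everything here is hypothetical (the event fields, the port-based scoring that stands in for an AI model, the action strings); only the collect-analyze-respond shape mirrors the list:

```python
def collect(sources):
    """Aggregate security events from several cloud log sources."""
    events = []
    for source, records in sources.items():
        for record in records:
            events.append({"source": source, **record})
    return events

def analyze(events):
    """Assign each event a naive risk score (stand-in for an AI model)."""
    risky_ports = {23, 3389}  # telnet, RDP: commonly abused services
    for event in events:
        event["score"] = 0.9 if event.get("port") in risky_ports else 0.1
    return events

def respond(events, cutoff=0.5):
    """Return the automated actions a platform might trigger."""
    return [f"isolate {e['source']}:{e.get('host', '?')}"
            for e in events if e["score"] >= cutoff]

sources = {
    "network": [{"host": "10.0.0.7", "port": 3389}],
    "endpoint": [{"host": "laptop-12", "port": 443}],
}
print(respond(analyze(collect(sources))))  # → ['isolate network:10.0.0.7']
```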
Benefits of AI and cloud computing for cyber security professionals
- Reduced time to detection and response: AI can significantly reduce the time it takes to identify and respond to threats, minimizing the potential damage caused by cyber attacks.
- Improved threat hunting: Security teams can utilize AI to proactively hunt for threats within their environments, uncovering hidden vulnerabilities and advanced persistent threats (APTs).
- Enhanced security decision-making: AI provides valuable insights and recommendations, enabling security professionals to make more informed decisions when prioritizing security risks and allocating resources.
- Reduced operational costs: Cloud-based AI eliminates the need for expensive on-premise infrastructure, reducing hardware and software costs associated with traditional security solutions.
Overcoming challenges of AI and cloud computing
While AI and cloud computing offer significant advantages for cyber security, there are also a handful of challenges to remain aware of:
- Data security concerns: Security professionals need to ensure that sensitive data stored in the cloud is protected from unauthorized access. Implementing robust security controls and encryption solutions is crucial.
- Explainability of AI decisions: Understanding how AI models arrive at their conclusions is essential for building trust within security teams. Implementing explainable AI (XAI) techniques can help address this concern.
- Talent shortage: The cyber security industry already faces a skilled workforce shortage. Integrating AI into systems requires that professionals have experience in both cyber security and AI, a combination that is still uncommon. Organizations may need to provide training to help employees bridge knowledge gaps.
Check Point Infinity: A powerful AI & cloud security platform
Check Point Infinity is a comprehensive cloud-delivered security platform that leverages the power of AI and advanced threat prevention technologies to secure organizations around the globe, around the clock.
Check Point Infinity’s AI-powered features, such as ThreatCloud intelligence and SandBlast Zero-Day Protection, enable security professionals to proactively block even the most sophisticated of cyber attacks.
By leveraging the power of AI and cloud computing, Check Point Infinity empowers security teams to strengthen their preventative measures and defenses, streamline security operations, and stay ahead of the most sophisticated threats.
For more insights into cyber security and AI, click here. Lastly, to receive cutting-edge cyber insights, groundbreaking research and emerging threat analyses each week, subscribe to the CyberTalk.org newsletter.
A crossroads for computing at MIT
On Vassar Street, in the heart of MIT’s campus, the MIT Stephen A. Schwarzman College of Computing recently opened the doors to its new headquarters in Building 45. The building’s central location and welcoming design will help form a new cluster of connectivity at MIT and enable the space to have a multifaceted role.
“The college has a broad mandate for computing across MIT,” says Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Ellis Warren Professor of Electrical Engineering and Computer Science. “The building is designed to be the computing crossroads of the campus. It’s a place to bring a mix of people together to connect, engage, and catalyze collaborations in computing, and a home to a related set of computing research groups from multiple departments and labs.”
“Computing is the defining technology of our time and it will continue to be, well into the future,” says MIT President Sally Kornbluth. “As the people of MIT make progress in high-impact fields from AI to climate, this fantastic new building will enable collaboration across computing, engineering, biological science, economics, and countless other fields, encouraging the cross-pollination of ideas that inspires us to generate fresh solutions. The college has opened its doors at just the right time.”
A physical embodiment
An approximately 178,000-square-foot, eight-floor structure, the building is designed to be a physical embodiment of the MIT Schwarzman College of Computing’s three-fold mission: strengthen core computer science and artificial intelligence; infuse the forefront of computing with disciplines across MIT; and advance social, ethical, and policy dimensions of computing.
Oriented for the campus community and the public to come in and engage with the college, the first two floors of the building encompass multiple convening areas, including a 60-seat classroom, a 250-seat lecture hall, and an assortment of spaces for studying and social interactions.
Academic activity has commenced in both the lecture hall and classroom this semester with 13 classes for undergraduate and graduate students. Subjects include 6.C35/6.C85 (Interactive Data Visualization and Society), a class taught by faculty from the departments of Electrical Engineering and Computer Science (EECS) and Urban Studies and Planning. The class was created as part of the Common Ground for Computing Education, a cross-cutting initiative of the college that brings multiple departments together to develop and teach new courses and launch new programs that blend computing with other disciplines.
“The new college building is catering not only to educational and research needs, but also fostering extensive community connections. It has been particularly exciting to see faculty teaching classes in the building and the lobby bustling with students on any given day, engrossed in their studies or just enjoying the space while taking a break,” says Asu Ozdaglar, deputy dean of the MIT Schwarzman College of Computing and head of EECS.
The building will also accommodate 50 computing research groups, which correspond to the number of new faculty the college is hiring — 25 in core computing positions and 25 in shared positions with departments at MIT. These groups bring together a mix of new and existing teams in related research areas spanning floors four through seven of the building.
In mid-January, the initial two dozen research groups moved into the building, including faculty from the departments of EECS; Aeronautics and Astronautics; Brain and Cognitive Sciences; Mechanical Engineering; and Economics who are affiliated with the Computer Science and Artificial Intelligence Laboratory and the Laboratory for Information and Decision Systems. The research groups form a coherent overall cluster in deep learning and generative AI, natural language processing, computer vision, robotics, reinforcement learning, game theoretic methods, and societal impact of AI.
More will follow suit, including some of the 10 faculty who have been hired into shared positions by the college with the departments of Brain and Cognitive Sciences; Chemical Engineering; Comparative Media Studies and Writing; Earth, Atmospheric and Planetary Sciences; Music and Theater Arts; Mechanical Engineering; Nuclear Science and Engineering; Political Science; and the MIT Sloan School of Management.
“I eagerly anticipate the building’s expansion of opportunities, facilitating the development of even deeper connections the college has made so far spanning all five schools,” says Anantha Chandrakasan, chief innovation and strategy officer, dean of the School of Engineering, and the Vannevar Bush Professor of Electrical Engineering and Computer Science.
Other college programs and activities that are being supported in the building include the MIT Quest for Intelligence, Center for Computational Science and Engineering, and MIT-IBM Watson AI Lab. There are also dedicated areas for the dean’s office, as well as for the cross-cutting areas of the college — the Social and Ethical Responsibilities of Computing, Common Ground, and Special Semester Topics in Computing, a new experimental program designed to bring MIT researchers and visitors together in a common space for a semester around areas of interest.
Additional spaces include conference rooms on the third floor that are available for use by any college unit. These rooms are accessible to both residents and nonresidents of the building to host weekly group meetings or other computing-related activities.
For the MIT community at large, the building’s main event space, along with three conference rooms, is available for meetings, events, and conferences. Located eight stories high on the top floor with striking views across Cambridge and Boston and of the Great Dome, the event space is already in demand with bookings through next fall, and has quickly become a popular destination on campus.
The college inaugurated the event space over the January Independent Activities Period, welcoming students, faculty, and visitors to the building for Expanding Horizons in Computing — a weeklong series of bootcamps, workshops, short talks, panels, and roundtable discussions. Organized by various MIT faculty, the 12 sessions in the series delved into exciting areas of computing and AI, with topics ranging from security, intelligence, and deep learning to design, sustainability, and policy.
Form and function
Designed by Skidmore, Owings & Merrill, the state-of-the-art space for education, research, and collaboration took shape over four years of design and construction.
“In the design of a new multifunctional building like this, I view my job as the dean being to make sure that the building fulfills the functional needs of the college mission,” says Huttenlocher. “I think what has been most rewarding for me, now that the building is finished, is to see its form supporting its wide range of intended functions.”
In keeping with MIT’s commitment to environmental sustainability, the building is designed to meet Leadership in Energy and Environmental Design (LEED) Gold certification. The final review with the U.S. Green Building Council is tracking toward a Platinum certification.
The glass shingles on the building’s south-facing side serve a dual purpose in that they allow abundant natural light in and form a double-skin façade constructed of interlocking units that create a deep sealed cavity, which is anticipated to notably lower energy consumption.
Other sustainability features include embodied carbon tracking, on-site stormwater management, fixtures that reduce indoor potable water usage, and a large green roof. The building is also the first to utilize heat from a newly completed utilities plant built on top of Building 42, which converted conventional steam-based distributed systems into more efficient hot-water systems. This conversion significantly enhances the building’s capacity to deliver more efficient medium-temperature hot water across the entire facility.
Grand unveiling
A dedication ceremony for the building is planned for the spring.
The momentous event will mark the official completion and opening of the new building and celebrate the culmination of hard work, commitment, and collaboration in bringing it to fruition.
It will also celebrate the 2018 foundational gift that established the college from Stephen A. Schwarzman, the chair, CEO, and co-founder of Blackstone, the global asset management and financial services firm. In addition, it will acknowledge Sebastian Man ’79, SM ’80, the first donor to support the building after Schwarzman. Man’s gift will be recognized with the naming of a key space in the building that will enrich the academic and research activities of the MIT Schwarzman College of Computing and the Institute.
New AI method captures uncertainty in medical images
In biomedicine, segmentation involves annotating pixels from an important structure in a medical image, like an organ or cell. Artificial intelligence models can help clinicians by highlighting pixels that may show signs of a certain disease or anomaly.
However, these models typically only provide one answer, while the problem of medical image segmentation is often far from black and white. Five expert human annotators might provide five different segmentations, perhaps disagreeing on the existence or extent of the borders of a nodule in a lung CT image.
“Having options can help in decision-making. Even just seeing that there is uncertainty in a medical image can influence someone’s decisions, so it is important to take this uncertainty into account,” says Marianne Rakic, an MIT computer science PhD candidate.
Rakic is lead author of a paper with others at MIT, the Broad Institute of MIT and Harvard, and Massachusetts General Hospital that introduces a new AI tool that can capture the uncertainty in a medical image.
Known as Tyche (named for the Greek divinity of chance), the system provides multiple plausible segmentations that each highlight slightly different areas of a medical image. A user can specify how many options Tyche outputs and select the most appropriate one for their purpose.
Importantly, Tyche can tackle new segmentation tasks without needing to be retrained. Training is a data-intensive process that involves showing a model many examples and requires extensive machine-learning experience.
Because it doesn’t need retraining, Tyche could be easier for clinicians and biomedical researchers to use than some other methods. It could be applied “out of the box” for a variety of tasks, from identifying lesions in a lung X-ray to pinpointing anomalies in a brain MRI.
Ultimately, this system could improve diagnoses or aid in biomedical research by calling attention to potentially crucial information that other AI tools might miss.
“Ambiguity has been understudied. If your model completely misses a nodule that three experts say is there and two experts say is not, that is probably something you should pay attention to,” adds senior author Adrian Dalca, an assistant professor at Harvard Medical School and MGH, and a research scientist in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).
Their co-authors include Hallee Wong, a graduate student in electrical engineering and computer science; Jose Javier Gonzalez Ortiz PhD ’23; Beth Cimini, associate director for bioimage analysis at the Broad Institute; and John Guttag, the Dugald C. Jackson Professor of Computer Science and Electrical Engineering. Rakic will present Tyche at the IEEE Conference on Computer Vision and Pattern Recognition, where Tyche has been selected as a highlight.
Addressing ambiguity
AI systems for medical image segmentation typically use neural networks. Loosely based on the human brain, neural networks are machine-learning models comprising many interconnected layers of nodes, or neurons, that process data.
After speaking with collaborators at the Broad Institute and MGH who use these systems, the researchers realized two major issues limit their effectiveness. The models cannot capture uncertainty and they must be retrained for even a slightly different segmentation task.
Some methods try to overcome one pitfall, but tackling both problems with a single solution has proven especially tricky, Rakic says.
“If you want to take ambiguity into account, you often have to use an extremely complicated model. With the method we propose, our goal is to make it easy to use with a relatively small model so that it can make predictions quickly,” she says.
The researchers built Tyche by modifying a straightforward neural network architecture.
A user first feeds Tyche a few examples that show the segmentation task. For instance, examples could include several images of lesions in a heart MRI that have been segmented by different human experts so the model can learn the task and see that there is ambiguity.
The researchers found that a “context set” of just 16 example images is enough for the model to make good predictions, but there is no limit to the number of examples one can use. The context set enables Tyche to solve new tasks without retraining.
For Tyche to capture uncertainty, the researchers modified the neural network so it outputs multiple predictions based on one medical image input and the context set. They adjusted the network’s layers so that, as data move from layer to layer, the candidate segmentations produced at each step can “talk” to each other and the examples in the context set.
In this way, the model can ensure that candidate segmentations are all a bit different, but still solve the task.
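As a toy illustration of the input/output contract described above (emphatically not the Tyche architecture itself), the sketch below turns a small context set of expert masks into several distinct candidate masks by thresholding the annotators' pixel-wise agreement at different levels. Masks are flat 0/1 lists, and the thresholding rule is invented for illustration:

```python
def candidate_segmentations(context_masks, num_candidates=3):
    """Derive several plausible binary masks from a context set of expert masks.

    A low agreement threshold yields an inclusive mask; a high threshold
    keeps only pixels most annotators marked. The candidates therefore
    differ exactly where the experts disagree.
    """
    n = len(context_masks)
    # Fraction of annotators who marked each pixel.
    agreement = [sum(mask[i] for mask in context_masks) / n
                 for i in range(len(context_masks[0]))]
    thresholds = [(k + 1) / (num_candidates + 1) for k in range(num_candidates)]
    return [[1 if a >= t else 0 for a in agreement] for t in thresholds]

# Three experts agree on the core pixels but disagree at the borders.
experts = [
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 1],
    [0, 0, 1, 1, 0],
]
for mask in candidate_segmentations(experts):
    print(mask)
# → [0, 1, 1, 1, 1]
#   [0, 1, 1, 1, 0]
#   [0, 0, 1, 1, 0]
```

Note how all three candidates solve the task (the core pixels are always included) while differing on the ambiguous border pixels, which is the behavior the article attributes to Tyche's interacting candidate predictions.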
“It is like rolling dice. If your model can roll a two, three, or four, but doesn’t know you have a two and a four already, then either one might appear again,” she says.
They also modified the training process so the model is rewarded for maximizing the quality of its best prediction.
If the user asked for five predictions, at the end they can see all five medical image segmentations Tyche produced, even though one might be better than the others.
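The "reward the best prediction" idea can be sketched as a best-of-k objective: score each candidate against the reference mask and keep only the lowest error, leaving the remaining candidates free to cover other plausible answers. The pixel-disagreement error below is a simple stand-in, not the loss used in the paper:

```python
def pixel_error(candidate, target):
    """Fraction of pixels where a candidate mask disagrees with the target."""
    return sum(c != t for c, t in zip(candidate, target)) / len(target)

def best_of_k_loss(candidates, target):
    """Score a set of candidate masks by the best one only.

    Only the lowest-error candidate contributes, so during training the
    other candidates are not penalized for proposing different answers.
    """
    return min(pixel_error(c, target) for c in candidates)

target = [0, 1, 1, 1, 0]
candidates = [
    [0, 1, 1, 1, 0],  # matches the target exactly
    [0, 0, 1, 1, 0],  # plausible but different
    [1, 1, 0, 0, 1],  # poor
]
print(best_of_k_loss(candidates, target))  # → 0.0
```

Because only the minimum contributes, the second and third candidates incur no penalty here; under a per-candidate loss they would all be pulled toward the single reference answer, collapsing the diversity the system is designed to preserve.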
The researchers also developed a version of Tyche that can be used with an existing, pretrained model for medical image segmentation. In this case, Tyche enables the model to output multiple candidates by making slight transformations to images.
Better, faster predictions
When the researchers tested Tyche with datasets of annotated medical images, they found that its predictions captured the diversity of human annotators, and that its best predictions were better than any from the baseline models. Tyche also performed faster than most models.
“Outputting multiple candidates and ensuring they are different from one another really gives you an edge,” Rakic says.
The researchers also saw that Tyche could outperform more complex models that have been trained using a large, specialized dataset.
For future work, they plan to try using a more flexible context set, perhaps including text or multiple types of images. In addition, they want to explore methods that could improve Tyche’s worst predictions and enhance the system so it can recommend the best segmentation candidates.
This research is funded, in part, by the National Institutes of Health, the Eric and Wendy Schmidt Center at the Broad Institute of MIT and Harvard, and Quanta Computer.