Q&A: Transforming research through global collaborations
The MIT Global Seed Funds (GSF) program fosters global research collaborations with MIT faculty and their peers abroad — creating partnerships that tackle complex global issues, from climate change to health-care challenges and beyond. Administered by the MIT Center for International Studies (CIS), the GSF program has awarded more than $26 million to over 1,200 faculty research projects since its inception in 2008. Through its unique funding structure — comprising a general fund for unrestricted geographical use and several specific funds within individual countries, regions, and universities — GSF supports a wide range of projects. The current call for proposals from MIT faculty and researchers with principal investigator status is open until Dec. 10.
CIS recently sat down with faculty recipients Josephine Carstensen and David McGee to discuss the value and impact GSF added to their research. Carstensen, the Gilbert W. Winslow Career Development Associate Professor of Civil and Environmental Engineering, generates computational designs for large-scale structures with the intent of designing novel low-carbon solutions. McGee, the William R. Kenan, Jr. Professor in the Department of Earth, Atmospheric and Planetary Sciences (EAPS), reconstructs the patterns, pace, and magnitudes of past hydro-climate changes.
Q: How did the Global Seed Funds program connect you with global partnerships related to your research?
Carstensen: One of the projects my lab is working on aims to unlock the potential of complex cast-glass structures. Through our GSF partnership with researchers at TU Delft (Netherlands), my group was able to pair our expertise in generative design algorithms with the TU Delft team's expertise in the physical casting and fabrication of glass structures. Our initial connection to TU Delft actually came through one of my graduate students, who met TU Delft researchers at a conference. He was inspired by their work and felt there could be synergy between our labs. The question then became: How do we connect with TU Delft? That was what led us to the Global Seed Funds program.
McGee: Our research is based in fieldwork conducted in partnership with experts who have a rich understanding of local environments. These locations range from lake basins in Chile and Argentina to caves in northern Mexico, Vietnam, and Madagascar. GSF has been invaluable for helping foster partnerships with collaborators and universities in these different locations, enabling the pilot work and relationship-building necessary to establish longer-term, externally funded projects.
Q: Tell us more about your GSF-funded work.
Carstensen: In my research group at MIT, we live mainly in a computational regime and do very little proof-of-concept testing. We do not even have the facilities or the experience to physically build large-scale or specialized structures. GSF enabled us to connect with the researchers at TU Delft, who do much more experimental testing than we do. Working with the TU Delft experts in their physical realm provided valuable insights into their way of approaching problems, and, likewise, the TU Delft researchers benefited from our expertise. It has been fruitful in ways we couldn't have imagined within our lab at MIT.
McGee: The collaborative work supported by the GSF has focused on reconstructing how past climate changes impacted rainfall patterns around the world, using natural archives like lake sediments and cave formations. One particularly successful project has been our work in caves in northeastern Mexico, which has been conducted in partnership with researchers from the National Autonomous University of Mexico (UNAM) and a local caving group. This project has involved several MIT undergraduate and graduate students, sponsored a research symposium in Mexico City, and helped us obtain funding from the National Science Foundation for a longer-term project.
Q: You both mentioned the involvement of your graduate students. How exactly has the GSF augmented the research experience of your students?
Carstensen: The collaboration has especially benefited the graduate students on both the MIT and TU Delft teams. The opportunity to engage in research at an international peer institution has been extremely beneficial for their academic growth and maturity. It has provided training in new and complementary technical areas they would not otherwise have had, and allowed them to engage with leading world experts. As one measure of the project's success, the collaboration has inspired one of my graduate students to actively pursue postdoc opportunities in Europe (including at TU Delft) after graduation.
McGee: MIT students have traveled to caves in northeastern Mexico and to lake basins in northern Chile to conduct fieldwork and build connections with local collaborators. Samples enabled by GSF-supported projects became the focus of two graduate students’ PhD theses, two EAPS undergraduate senior theses, and multiple UROP [Undergraduate Research Opportunity Program] projects.
Q: Were there any unexpected benefits to the work funded by GSF?
Carstensen: The success of this project would not have been possible without this specific international collaboration. The Delft and MIT teams bring very different but complementary expertise, both of which were essential to the project's outcome, and each team gained an in-depth understanding of the other's areas of expertise and resources. Both teams have been deeply inspired. The partnership has fueled conversations about potential future projects and produced multiple outcomes, including a plan to publish two journal papers on the results. The first invited publication is being finalized now.
McGee: GSF’s focus on reciprocal exchange has enabled external collaborators to spend time at MIT, sharing their work and exchanging ideas. Other funding is often focused on sending MIT researchers and students out, but GSF has helped us bring collaborators here, making the relationship more equal. A GSF-supported visit by Argentinian researchers last year made it possible for them to interact not just with my group, but with students and faculty across EAPS.
Photonic processor could enable ultrafast AI computations with extreme energy efficiency
The deep neural network models that power today’s most demanding machine-learning applications have grown so large and complex that they are pushing the limits of traditional electronic computing hardware.
Photonic hardware, which can perform machine-learning computations with light, offers a faster and more energy-efficient alternative. However, there are some types of neural network computations that a photonic device can’t perform, requiring the use of off-chip electronics or other techniques that hamper speed and efficiency.
Building on a decade of research, scientists from MIT and elsewhere have developed a new photonic chip that overcomes these roadblocks. They demonstrated a fully integrated photonic processor that can perform all the key computations of a deep neural network optically on the chip.
The optical device was able to complete the key computations for a machine-learning classification task in less than half a nanosecond while achieving more than 92 percent accuracy — performance that is on par with traditional hardware.
The chip, composed of interconnected modules that form an optical neural network, is fabricated using commercial foundry processes, which could enable the scaling of the technology and its integration into electronics.
In the long run, the photonic processor could lead to faster and more energy-efficient deep learning for computationally demanding applications like lidar, scientific research in astronomy and particle physics, or high-speed telecommunications.
“There are a lot of cases where how well the model performs isn’t the only thing that matters, but also how fast you can get an answer. Now that we have an end-to-end system that can run a neural network in optics, at a nanosecond time scale, we can start thinking at a higher level about applications and algorithms,” says Saumil Bandyopadhyay ’17, MEng ’18, PhD ’23, a visiting scientist in the Quantum Photonics and AI Group within the Research Laboratory of Electronics (RLE) and a postdoc at NTT Research, Inc., who is the lead author of a paper on the new chip.
Bandyopadhyay is joined on the paper by Alexander Sludds ’18, MEng ’19, PhD ’23; Nicholas Harris PhD ’17; Darius Bunandar PhD ’19; Stefan Krastanov, a former RLE research scientist who is now an assistant professor at the University of Massachusetts at Amherst; Ryan Hamerly, a visiting scientist at RLE and senior scientist at NTT Research; Matthew Streshinsky, a former silicon photonics lead at Nokia who is now co-founder and CEO of Enosemi; Michael Hochberg, president of Periplous, LLC; and Dirk Englund, a professor in the Department of Electrical Engineering and Computer Science, principal investigator of the Quantum Photonics and Artificial Intelligence Group and of RLE, and senior author of the paper. The research appears today in Nature Photonics.
Machine learning with light
Deep neural networks are composed of many interconnected layers of nodes, or neurons, that operate on input data to produce an output. One key operation in a deep neural network involves the use of linear algebra to perform matrix multiplication, which transforms data as it is passed from layer to layer.
But in addition to these linear operations, deep neural networks perform nonlinear operations that help the model learn more intricate patterns. Nonlinear operations, like activation functions, give deep neural networks the power to solve complex problems.
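In code, these two kinds of operations, a linear matrix multiplication followed by a nonlinear activation, form the basic layer pattern of a deep network. A minimal NumPy sketch (the layer sizes and the choice of ReLU as the activation are purely illustrative):

```python
import numpy as np

def relu(x):
    # Nonlinear activation: this is what lets the network learn
    # patterns that a stack of purely linear layers could not.
    return np.maximum(0.0, x)

def forward(x, weights):
    # Each hidden layer: matrix multiplication (linear algebra)
    # followed by a nonlinearity, transforming data layer to layer.
    for W in weights[:-1]:
        x = relu(W @ x)
    return weights[-1] @ x  # final linear readout

rng = np.random.default_rng(0)
weights = [
    rng.standard_normal((8, 4)),   # input layer: 4 features -> 8 neurons
    rng.standard_normal((8, 8)),   # hidden layer
    rng.standard_normal((3, 8)),   # output layer: 3 classes
]
y = forward(rng.standard_normal(4), weights)
```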
In 2017, Englund’s group, along with researchers in the lab of Marin Soljačić, the Cecil and Ida Green Professor of Physics, demonstrated an optical neural network on a single photonic chip that could perform matrix multiplication with light.
But at the time, the device couldn’t perform nonlinear operations on the chip. Optical data had to be converted into electrical signals and sent to a digital processor to perform nonlinear operations.
“Nonlinearity in optics is quite challenging because photons don’t interact with each other very easily. That makes it very power consuming to trigger optical nonlinearities, so it becomes challenging to build a system that can do it in a scalable way,” Bandyopadhyay explains.
They overcame that challenge by designing devices called nonlinear optical function units (NOFUs), which combine electronics and optics to implement nonlinear operations on the chip.
The researchers built an optical deep neural network on a photonic chip using three layers of devices that perform linear and nonlinear operations.
A fully integrated network
At the outset, their system encodes the parameters of a deep neural network into light. Then, an array of programmable beamsplitters, which was demonstrated in the 2017 paper, performs matrix multiplication on those inputs.
The data then pass to programmable NOFUs, which implement nonlinear functions by siphoning off a small amount of light to photodiodes that convert optical signals to electric current. This process, which eliminates the need for an external amplifier, consumes very little energy.
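The pipeline described above can be mimicked numerically. The sketch below is a toy model, not the actual device physics: the programmable beamsplitter mesh is represented as a unitary matrix acting on complex optical fields, and each NOFU is modeled as tapping off a small fraction of the light to a photodiode whose photocurrent modulates the phase of the remaining signal.

```python
import numpy as np

def random_unitary(n, rng):
    # A programmable beamsplitter mesh can realize an arbitrary unitary
    # matrix; here we simply sample one numerically via QR decomposition.
    q, r = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
    d = np.diag(r)
    return q * (d / np.abs(d))  # fix column phases so the result is unitary

def nofu(field, tap=0.1):
    # Toy NOFU: siphon a fraction `tap` of the optical power to a
    # photodiode, and let the photocurrent phase-modulate the rest.
    photocurrent = tap * np.abs(field) ** 2
    return np.sqrt(1.0 - tap) * field * np.exp(1j * photocurrent)

rng = np.random.default_rng(1)
n = 6
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # input encoded in light
p_in = np.sum(np.abs(x) ** 2)       # total input optical power
for _ in range(3):                   # three layers of linear + nonlinear devices
    x = nofu(random_unitary(n, rng) @ x)
out = np.abs(x) ** 2                 # read out intensities only at the end
```

Because the mesh is unitary and each NOFU taps off 10 percent of the power, total output power is exactly (0.9)^3 of the input in this model, mirroring how the real device stays in the optical domain until the final readout.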
“We stay in the optical domain the whole time, until the end when we want to read out the answer. This enables us to achieve ultra-low latency,” Bandyopadhyay says.
Achieving such low latency enabled them to efficiently train a deep neural network on the chip, a process known as in situ training that typically consumes a huge amount of energy in digital hardware.
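The chip's exact training procedure is not detailed here, but one common way to train a physical system in situ is to treat the hardware as a black box: perturb each programmable parameter, measure the change in loss, and step downhill. A finite-difference sketch of that idea (the loss function and parameters are purely illustrative stand-ins for on-chip settings):

```python
import numpy as np

def in_situ_step(loss_fn, params, lr=0.05, eps=1e-3):
    # Estimate the gradient by directly perturbing each parameter and
    # re-measuring the loss, then take one gradient-descent step.
    grad = np.zeros_like(params)
    base = loss_fn(params)
    for i in range(params.size):
        bumped = params.copy()
        bumped[i] += eps
        grad[i] = (loss_fn(bumped) - base) / eps
    return params - lr * grad

# Toy "hardware" loss: drive two parameters toward a target setting.
target = np.array([0.3, -0.7])
loss = lambda p: np.sum((p - target) ** 2)
p = np.zeros(2)
for _ in range(200):
    p = in_situ_step(loss, p)
```

On digital hardware each loss evaluation is a full forward pass, which is why in situ training is so energy-hungry there; sub-nanosecond optical inference makes the many repeated measurements cheap.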
“This is especially useful for systems where you are doing in-domain processing of optical signals, like navigation or telecommunications, but also in systems that you want to learn in real time,” he says.
The photonic system achieved more than 96 percent accuracy during training tests and more than 92 percent accuracy during inference, which is comparable to traditional hardware. In addition, the chip performs key computations in less than half a nanosecond.
“This work demonstrates that computing — at its essence, the mapping of inputs to outputs — can be compiled onto new architectures of linear and nonlinear physics that enable a fundamentally different scaling law of computation versus effort needed,” says Englund.
The entire circuit was fabricated using the same infrastructure and foundry processes that produce CMOS computer chips. This could enable the chip to be manufactured at scale, using tried-and-true techniques that introduce very little error into the fabrication process.
Scaling up their device and integrating it with real-world electronics like cameras or telecommunications systems will be a major focus of future work, Bandyopadhyay says. In addition, the researchers want to explore algorithms that can leverage the advantages of optics to train systems faster and with better energy efficiency.
This research was funded, in part, by the U.S. National Science Foundation, the U.S. Air Force Office of Scientific Research, and NTT Research.
15+ Best Resume & CV Video Templates – Speckyboy
We often define resumes and CVs as static documents. We print them out or post them online – but that’s the end of the story. Or is it?
Video offers a different way to tell your story. You can use it to show off your skills by adding movement and special effects. And it’s far more memorable than any old document.
If the idea is to capture an employer’s eye, video is the format to achieve it. The challenge is putting together a top-notch presentation. Even experienced videographers may be stretched to their limits.
Not to worry! Our collection of resume and CV video templates will make your task easier. They feature beautiful designs and professional effects. Grab one, customize it, and share it with the world.
The items below include all the features you’ll need. Plus, there are picks for all the top video editing suites, such as After Effects, Premiere Pro, DaVinci Resolve, and Final Cut Pro. Let’s get started!
You might also like our collection of personal portfolio video templates.
Use this resume template to provide employers with an attractive overview. It comes with seven information-rich slides you can customize to fit your needs. Feature your biography, education, and past projects with this package for Premiere Pro.
Here’s a beautiful After Effects template to turn your resume into a thing of beauty. The silky-smooth animations and transitions will bring your skills to life. Prospective employers won’t be able to take their eyes off you.
This template for Premiere Pro is optimized for mobile screens – perfect for sharing with busy executives on the go. It’s attractive, with creative layouts and modern animations. The result is a high-quality video that checks all the boxes.
Make your CV a modern masterpiece using this Premiere Pro template. It features ten outstanding slides for displaying facts and figures. You can easily change the colors and fonts to match your personal brand.
Do you want a resume with a high-tech aesthetic? Use this After Effects template to give your info a professional touch. It includes a suite of outstanding effects along with dark and light modes.
This template for After Effects features fun animations and razor-sharp design elements. The look is friendly and inviting, with bold type and color choices. You’ll have a beautiful resume that leaves a great impression.
You’ll find plenty of options to list your technical skills with this template. Use the included skills chart to share your areas of expertise. There are also spots for your past projects and contact details.
Unlock your resume’s potential with this After Effects package. The animations are stunning but won’t distract from the details of your CV. It’s also built for easy customization – change colors and fonts with just a few clicks.
Here’s a big and bold way to impress prospective employers. The presentation is seamless and smooth, with bold type and lots of movement. Add your photos and video clips to personalize the viewing experience.
Take advantage of beautiful lighting effects with this resume template for Final Cut Pro. Inside, you’ll find 15 text placeholders and five spots for media. It’s an excellent fit for designers or anyone who wants to present artistic flair.
Put your skills and experience to the forefront using this clean After Effects template. There’s room to highlight your strong points and display past work. It’s a slick package that makes you look your best.
Crisp and colorful, this template will make your personality come through the screen. It features fun shapes and attention-getting transitions. Viewers are sure to take notice with this exciting video resume.
Those who want a modern aesthetic will love this Premiere Pro template. It combines a high-contrast color scheme with minimalistic typography. There’s meticulous attention to detail here that employers will remember.
This DaVinci Resolve template features eye-catching special effects to highlight your skills. Color and movement are everywhere and serve as a fine background for your CV. It’s a great choice for visual artists and content creators.
Bring a cinematic quality to your video resume by customizing this template. It comes packed with six color profiles and incredible animation effects. Choose this one if you want to stand out from the crowd.
Introduce Yourself with a Video Resume
A strong resume is a vital tool for job seekers. So, why not go the extra mile to introduce yourself to prospective employers and clients? A compelling video presentation can be a difference maker.
The templates above will help you make a great first impression. They offer a variety of styles that give your resume a professional touch. What’s not to love?
We hope you found the perfect template to help you land your dream job!