PromeAI Review: Turning Simple Sketches into Stunning Images

As a creative professional, have you ever stared at a sketch, knowing its potential yet dreading the hours of refining it into a polished design? You’re not alone! Many designers struggle with balancing systematic approaches and creativity in product design, especially in startup environments. PromeAI helps…

Muscle-Powered Robotics: A New Frontier in Biomimetic Engineering

In a notable development in the field of robotics, researchers at ETH Zurich and the Max Planck Institute for Intelligent Systems have unveiled a new robotic leg that mimics biological muscles more closely than ever before. This innovation marks a significant departure from traditional robotics, which…

Startup’s displays engineer light to create immersive experiences without the headsets

One of the biggest reasons virtual reality hasn’t taken off is the clunky headsets that users have to wear. But what if you could get the benefits of virtual reality without the headsets, using screens that computationally improve the images they display?

That’s the goal of the startup Brelyon, which is commercializing a new kind of display and content-rendering approach that immerses users in virtual worlds without requiring them to strap goggles onto their heads.

The displays run light through a processing layer before it reaches users’ eyes, recalculating the image to create ultrawide visual experiences with depth. The company is also working on a new kind of content-rendering architecture to generate more visually efficient imagery. The result is a 120-inch screen that simulates the sensation of looking out a window into a virtual world, where content pops in and out of existence at different angles and depths, depending on what you feed the display.

“Our current displays use different properties of light, specifically the wavefront of the electric field,” says Brelyon co-founder and CEO Barmak Heshmat, a former postdoc in the Media Lab. “In our newest architecture, the display uses a stack of shader programming empowered with inference microservices to modify and generate content on the fly, amplifying your immersion with the screens.”

Customers are already using Brelyon’s current displays in flight simulators, gaming, defense, and teleoperations, and Heshmat says the company is actively scaling its manufacturing capacity to meet growing demand.

“Wherever you want to increase visual efficiency with screens, Brelyon can help,” Heshmat says. “Optically, these virtual displays allow us to craft a much larger, control-center-like experience without needing added space or wearing headsets, and at the compute level our rerendering architectures allow us to use every bit of that screen in the most efficient way.”

Of light and math

Heshmat came to MIT in 2013 as a postdoc in the Media Lab’s Camera Culture group, which is directed by Associate Professor Ramesh Raskar. At the Media Lab, Heshmat worked on computational imaging, which he describes as “combining mathematics with the physics of light to do interesting things.”

With Raskar, Heshmat worked on a new approach to improving ultrafast cameras that used time as an extra dimension in optical design.

“The system essentially sent light through an array of mirrors to make the photons bounce many times inside the camera,” Heshmat explains. “It allowed us to capture the image at many different times.”

Heshmat worked across campus, ultimately publishing papers with five different professors, and says his experience at MIT helped change the way he perceived himself.

“There were many things that I took from MIT,” Heshmat says. “Beyond the technical expertise, I also got the confidence and belief that I could be a leader. That’s what’s different about MIT compared to other schools: It’s a very vibrant, intellectually-triggering environment where everyone’s very driven and everyone’s creating their own universe, in a sense.”

After graduating, Heshmat worked at a virtual reality company, where he noticed that people liked the idea of virtual reality but didn’t like wearing headsets. The observation led him to explore ways of achieving immersion without strapping a device to his head.

The idea brought him back to his research with Raskar at MIT.

“There’s this relationship between imaging and displays; they’re kind of like a dual of each other,” Heshmat explains. “What you can do with imaging, the inverse of it is doable with displays. Since I’d worked on this imaging system at MIT, what’s called time-folded imaging, I thought to try the inverse of that in the world of displays. That was how Brelyon started.”

Brelyon’s first check came from the MIT-affiliated E14 Fund after Heshmat built a prototype of the first device in his living room.

Brelyon’s displays control the angles and focus of light to simulate wide, deep views and give the impression of looking through a window. Brelyon currently sells two displays, Ultra Reality and Ultra Reality Mini. The Ultra Reality offers a 10-foot-wide image with a depth of around 3 feet. The displays are fully compatible with standard laptops and computers, so users can connect their devices via an HDMI cable and run their favorite simulation or gaming software right away. Heshmat notes this is a key benefit over traditional, headset-based virtual reality displays, which require companies to create custom software.

“This is a plug-and-play solution that is much smaller than setting up a projection screen, doesn’t require a dedicated room, doesn’t require a special environment, doesn’t need alignment of projectors or any of that,” Heshmat says.

Processing light

Heshmat says Brelyon has sold displays to some of the largest simulation training companies in the world.

“In simulation training, you usually care about large visualizations and large peripheral fields of view, or situational awareness,” Heshmat says. “That allows you to look around in, say, the cockpit of the airplane. Brelyon allows you to do that in the size of a single desktop monitor.”

Brelyon has been focused on selling its displays to other businesses to date, but Heshmat hopes to eventually sell to individuals and believes the company’s displays hold huge potential for anyone who wants to improve the experience of looking at a monitor.

“Imagine you’re sitting in the backseat of a car, and instead of looking at a 12-inch tablet, you have this 14-inch or 12-inch aperture, but this aperture is looking into a much larger image, so you have a window to an IMAX theater,” Heshmat says.

Ultimately, Heshmat believes Brelyon is opening up a new platform to change the way we perceive the digital world.

“We are adding a new layer of control between the world of computers and what your eyes see,” Heshmat explains. “We have this new photon-processing layer on top of displays, and we think we’re bridging the gap between the experience that you see and the world of computers. We’re trying to connect that programming all the way to the end processing of photons. There are some exciting opportunities that come from that. The displays of the future won’t just emit light like an array of lamps. They’ll run light through these photon processors and allow you to do much more with light.”

Reflection 70B : LLM with Self-Correcting Cognition and Leading Performance

Reflection 70B is an open-source large language model (LLM) developed by HyperWrite. This new model introduces an approach to AI cognition that could reshape how we interact with and rely on AI systems in numerous fields, from language processing to advanced problem-solving. Leveraging Reflection-Tuning, a groundbreaking…

3 Questions: What does innovation look like in the field of substance use disorder?

In 2020, more than 278,000 people died from substance use disorder, with over 91,000 of those deaths from overdoses. Just three years later, deaths from overdoses alone rose by over 25,000. Despite its magnitude, the substance use disorder crisis still faces fundamental challenges: a prevailing societal stigma, lack of knowledge around its origin in the brain, and the slow pace of innovation in comparison to other diseases.

Work at MIT is contributing to meaningful innovations in the field of substance use disorder, according to Hanna Adeyema MBA ’13, director of MIT Bootcamps at MIT Open Learning, and Carolina Haass-Koffler, associate professor of psychiatry and human behavior at Brown University.

Adeyema is leading an upcoming MIT Bootcamps Substance Use Disorder (SUD) Ventures program. She was the chief operating officer and co-founder of Tenacity, a startup based on research from the MIT Media Lab founded to reduce burnout for call center workers. Haass-Koffler is a translational investigator who coalesces preclinical and clinical research toward examining biobehavioral mechanisms of addiction and developing novel medications. She was a finalist for the 2023-24 MIT-Royalty Pharma Prize Competition, an award supporting female entrepreneurs in biotech, and the winner of the 2024 Brown Biomedical Innovation to Impact translational commercial development program, which supports innovative proof-of-concept projects. In 2023, Haass-Koffler produced a substance use disorder 101 course for the SUD Ventures program and secured non-dilutive funding from the NIH toward work in innovation in this area. Here, Adeyema and Haass-Koffler join in a discussion about the substance use disorder crisis and the future of innovation in this field.

Q: What are the major obstacles to making meaningful advances in substance use disorder research and treatment and/or innovation?

Adeyema: The complexity of the substance use disorder market and the incredible amount of knowledge required to innovate is a major obstacle to bringing research from the bench to market. Innovators must not only understand their technical domain in great detail, but also federal regulations, state regulations, and payers in the health care sector. On top of this, they must know how to pitch to specialized investors, how to sell to hospitals, and understand how to interact with vulnerable populations — often all at the same time.

Given this, solving the substance use disorder epidemic will require a multidisciplinary approach — from health care innovators to researchers to government officials and everyone in between. MIT is the right place to address innovation in the substance use disorder space because we have all of those talented people here and we know how to collaborate to solve societal problems at scale. An example of how we are working together in this way is the collaboration with the National Institutes of Health and the National Institute on Drug Abuse to create the SUD Ventures program. The goal of this program is to fuel the next generation of innovation in substance use disorder with practical applications and a pipeline to securing non-dilutive government funding from Small Business Innovation Research grants.

Haass-Koffler: Before even mentioning substance use disorder, there are a number of barriers in health care that already exist, such as health insurance reimbursement, limited availability of resources, shortage of clinicians, and more. Specifically in substance use disorder, there are additional barriers affecting patients, clinicians, and innovators. Barriers on the clinical side include, but are not limited to, lack of resources available to providers and lack of time for physicians to include additional substance use disorder assessments in the few minutes that they spend with a patient during a clinical visit. Then on the patient side, the population is often composed of individuals from low socio-economic groups, which adds issues related to stigma, confidentiality, and a lack of referral networks, all of which hinder the development of novel substance use disorder treatment interventions.

At a high level, we lack the integration of substance use disorder prevention, diagnostic, and treatment in health care settings. Without a more holistic integration, advancing substance use disorder research and innovation will continue to be extremely challenging. By creating a collaborative program where we can connect researchers, clinicians, and engineers, we have the opportunity to bring together a dynamic community of peers to tackle the biggest challenges in providing treatment of this debilitating disorder.

Q: How does the SUD Ventures program approach substance use disorder innovation differently?

Adeyema: Traditionally, innovation programs in the substance use disorder space focus on entrepreneurship and business courses for researchers and inventors. These courses focus on knowledge, rather than skills and practical application, and omit an important piece of building a business — it takes an entire ecosystem to build a successful startup, particularly in the health care space.

Our program will bring together the top U.S.-based substance use disorder researchers and experts in other disciplines. We hope to tap into MIT’s engineering excellence, clinical expertise from places like Massachusetts General Hospital, and other academic institutions like Harvard University and Brown University, which is a major center for substance use disorder research. With the vibrant entrepreneurship and biomedical expertise in the Boston ecosystem, we are excited to see how we can bring these incredible forces together. Participants will work together in teams to develop solutions in specific topic areas in substance use disorder. They are guided by MIT-trained entrepreneurs who have successfully funded and scaled companies in the health care space, and have access to a strong group of mentors like Nathaniel Sims, associate professor of anesthesia at Harvard Medical School and the Newbower/Eitan MGH Endowed Chair in Biomedical Technology Innovation at Massachusetts General Hospital.

We recognize the field has many idiosyncratic challenges, and it is also changing very, very fast. To shed light on the most recent and unique roadblocks, the SUD Ventures program will rely on industry case studies delivered by practitioners. These cases will be updated each year to contribute to a body of knowledge participants have access to not only during the program, but also after.

Q: Looking forward, what is the future of innovation in the substance use disorder field, and what are the promising innovations/therapies on the horizon?

Haass-Koffler: The opportunities to develop technologies to treat substance use disorder are infinite. Historically, the approach has been centered on neurobiology, focusing predominantly on the brain. However, substance use disorder is a complex disorder and lacks measurable biomarkers, which complicates its diagnosis and management. Given the brain’s connections with other bodily systems, targeting interventions beyond the central nervous system offers a promising avenue for more effective treatment.

To improve the efficiency of treatment by both researchers and clinicians, we need technological advancements that can probe brain function and monitor treatment responses with greater precision. Innovations in this area could lead to more tailored therapeutic approaches, enable earlier diagnosis, and improve overall patient care.

Just as glucose monitoring changed lives by managing insulin delivery in diabetes, there is a significant opportunity to create similar tools for monitoring medication responses, drug cravings, and preventing adverse events in patients with substance use disorder, affecting their lives tremendously. The future for the substance use disorder crisis is two-fold: it’s about saving lives by preventing overdoses today and improving quality of life by supporting patients throughout their extended treatment journeys. We are innovating and improving on both fronts of the crisis, and I am optimistic about the progress we will continue to make in treating this disease in the next couple of years. With government and political support, we are improving people’s lives and improving society.

The program and its research are supported by the National Institute on Drug Abuse (NIDA) of the National Institutes of Health (NIH). Cynthia Breazeal, a professor of media arts and sciences at the MIT Media Lab and dean for digital learning at MIT Open Learning, serves as the principal investigator (PI) on the grant.

Videoguys is your Source for Storage – Videoguys

Get the performance and reliability of Avid NEXIS® in a more affordable shared storage system designed for small video or audio production teams. Avid NEXIS | PRO offers real-time collaboration to accelerate your media workflow. Find and share media fast. Adapt workspace capacity, performance, and protection as requirements change. And get the same secure and reliable workflows trusted by the industry.

$16,643.00

High-performance 10GbE desktop NAS with industry reliability and support. The desktop NAS built for small business and workgroups. Modern Enterprise file system that automatically creates redundant copies of your data to ensure your files are readily available, even in the case of drive failure.

Starting at $2,999.00

The Facilis HUB 8 is our entry-level video production server dedicated to post-production editorial and content creation workflows. With 1GB/sec speeds through standard dual-port 10Gb Ethernet and options like 40Gb Ethernet and Fibre Channel, the HUB 8’s features are not entry-level. This server comes in 32TB, 48TB, 64TB, 96TB, 128TB, and 176TB storage capacities, giving you plenty of room for your next project.

Starting at $11,990.00

Teradek Prism Mobile and Core Cloud Platform for Police & Public Safety – Videoguys

In his article, “A Small City Shows a Metro Area of Millions How to Master Live Video for Police and Public Safety,” Tom Mangan showcases how Lafayette, California, is using advanced live video technology to enhance public safety. This small city, located near San Francisco, has implemented cutting-edge video solutions from Teradek, a leader in streaming technology, to improve real-time situational awareness for police and emergency responders. By upgrading their drone and helicopter video feeds, Lafayette is setting an example for how even smaller communities can utilize innovative technology for public safety and emergency preparedness.

Lafayette’s emergency services coordinator, John Cornell, identified the need to enhance video feeds from drones and helicopters, especially in emergencies like wildfires and earthquakes. His search led him to Teradek’s Prism Mobile and Core Cloud Platform, award-winning technology initially designed for the film industry. These tools now provide real-time video streaming via 5G connections, allowing Lafayette’s police and public safety agencies to share live feeds across different jurisdictions. This technology not only improved situational awareness but also led to the formation of the East Bay Interagency Video Network, promoting collaboration among neighboring agencies.

The benefits of Teradek’s technology were quickly realized, such as during the rescue of an injured kite surfer in San Francisco Bay. The clear, live video feed provided rescue teams with vital information, enhancing coordination and response times. By embracing live video streaming technology, Lafayette has significantly improved its emergency response capabilities, showing how small cities can leverage modern tools to increase safety and cooperation across larger metropolitan areas. This case study demonstrates the future of public safety technology and its potential to transform how agencies handle critical situations.

Read the full article by Tom Mangan for Police1 HERE


Learn more about Teradek below:

LiveU introduces LiveU IQ, The Next Dimension of Resilience and Performance – Videoguys

LiveU’s revolutionary new connectivity offering utilizes artificial intelligence and large data sets to break free of pre-allocated network constraints.


After revolutionizing live video transmission 18 years ago, LiveU is doing it again! LiveU IQ (LIQ™) introduces the next dimension in reliability and trust for IP-video technology, ensuring unparalleled performance and resiliency even in the most challenging locations. LIQ is a revolutionary new technological breakthrough in IP-video transport that propels connection, resiliency and performance to previously unreachable heights.

How Does it Work?
LiveU Reliable Transport (LRT™) is your go-to IP-video protocol, ensuring rock-solid live video transmission by bonding multiple IP links for maximum bandwidth. LiveU IQ amplifies it, playing to the strengths of network operators by automatically swapping to higher performing ones, even in rural or congested network areas.

Samuel Wasserman, LiveU’s co-founder and CEO, said, “Customer-centric innovation is deeply ingrained in LiveU’s DNA. We founded the company to make video production easier, more accessible, and efficient for broadcasters and content creators, and that mission still drives us today. LiveU IQ means that connectivity options are no longer limited to a preselected, fixed set of cellular operators. Now we can dynamically and smartly switch to the best performing network configurations, live and on air.”

“Unlike current cellular bonding setups, where one or more modems may be tied to networks that are not performing as well as others for a given location,” Wasserman continues, “LiveU IQ always plays to the strengths of our network operator partners by automatically swapping to higher-performing ones, even in rural or congested network areas.”

“LiveU IQ consists of three main elements: LiveU’s own universal eSIMs, able to work with any operator network; a cloud-based decisioning engine working with our big data set of network performance logs; and LiveU Analytics, showing users the dynamic switching events and exactly how their session performance has improved,” added Gideon Gilboa, LiveU’s Chief Product Officer. “With LIQ, it’s easy for our customers to see how they can ‘Go Live. Better.’”

Check out the full announcement here!

Learn more about LiveU here!