Everything You Need To Know About Fortnite Chapter 4: Season OG

Fortnite Chapter 4: Season OG goes live in a few hours, once its downtime is complete, bringing players back to 2018’s Chapter 1 map alongside throwback weapons, loot, vehicles, and more. While it’s technically Season 5 of Fortnite Chapter 4, following the previous Last Resort season, developer Epic Games is calling it Season OG, which makes sense considering how much of 2018’s Fortnite is making an appearance.

The throwback nostalgia continues throughout the season because each new update brings with it a different phase of Fortnite Chapter 1’s past, starting with Season 5. That means weapons like the Assault Rifle and Pump Shotgun, and vehicles like Shopping Carts, will be featured in both Zero Build and regular Battle Royale Fortnite. 

Here’s Everything You Need To Know About Fortnite Chapter 4: Season OG


While each update listed below brings new weapons and more to Season OG, all based on previous Chapter 1 seasons, Epic says some of the unvaulted gear will remain throughout the season while other items will stay for just their update. Below is part of what’s being unvaulted: 

Season 5: The Return of Tilted, Greasy, and Risky


First released during Chapter 1: Season 2, Tilted Towers is a point of interest on the island that will remain until Season OG’s Season 9 update. Alongside Tilted Towers, Epic has added back weapons, vehicles, and traps from the original Season 5 of Chapter 1, including the Assault Rifle, Pump Shotgun, and Hunting Rifle. The Damage Trap, Grappler, and Boogie Bomb are unvaulted now, too, as are All Terrain Karts (ATKs) and Shopping Carts.

Season 6: Darkness Rises in Loot Lake


After the v27.00 update goes live on November 9, players will find that darkness has risen in Loot Lake, bringing with it weapons from Fortnite Chapter 1: Season 6 like the Double Barrel Shotgun, Clinger, and Six Shooter. Plus, the Quadcrasher, Mounted Turret, and more return alongside the Chiller Trap and Port-a-Fortress (note: the Chiller Trap and Port-a-Fortress will not be in Zero Build modes). And finally, the Driftboard makes its way onto this island with this update as well. 

Seasons 7 and 8: Of Chill and Treasure


Update v27.10 goes live November 16 and features both Chapter 1: Season 7 and Season 8. It brings with it a snow biome, Frosty Flights, pirate camps, swashbuckling gear, and more. Weapons include the Flint-Knock Pistol, Minigun, and Quad Launcher. Other items include the Poison Dart Trap, Itemized Glider Redeploy, and Buried Treasure (though the Poison Dart Trap will not be in Zero Build). On the vehicle front, players can take to the skies in the X-4 Stormwing, and if you’re struggling to take one down, try the Pirate Cannon, which can also launch teammates into the air. 

Season 9 and X: Blast Off!


On November 23, update v27.11 will go live, celebrating Fortnite Chapter 1: Season 9 and Season X. As such, weapons like the Heavy Sniper Rifle hit the island alongside other loot like the Proximity Grenade Launcher, Air Strike, and Junk Rift. Plus, this update adds the Storm Flip and Jetpack to the mix, as well as The Baller. 

Fortnite Chapter 4: Season OG Battle Pass and Item Shop

“A time-traveling, turbo-speed OG season means the all-new OG Pass,” Epic writes in a press release. “Packed with over 50 new in-game items, you can unlock all the cosmetic rewards in the OG Pass in just four weeks. The OG Pass is purchasable for 950 V-Bucks, but you can earn up to 1000 V-Bucks by progressing in the OG Pass. Fortnite Crew Subscriber? The OG Pass is included as part of the Fortnite Crew Subscription.” 


In the OG Item Shop, Epic says to expect curated selections of classic, mashup, and new items. The OG Item Shop outfits and accessories will only be available for a limited time, and, per usual, new items will appear daily at 7 p.m. ET.

Here’s a look at one of the OG Item Shop outfits: 


Fortnite Chapter 4: Season OG Ranked

As usual with a new season, your rank has been reset in Chapter 4: Season OG. You only need to play one ranked match to have your rank revealed; that first match doesn’t determine your initial placement, but it does reveal your rank and update your progress bar based on how you performed. Epic warns that matchmaking in higher ranks will likely take some time, especially if you were previously in a lower rank. 

“With Fortnite OG being a short season, rank progression will be faster at the higher and middle ranks to accommodate,” Epic writes in a press release. “Players will be penalized less for being eliminated early in a match and will have slightly faster increases to their rank progression bar. Additionally, we’ve made progression faster for players in the Bronze ranks, so friends queueing up together across ranks will see more constant progression.” 

When you jump off the Battle Bus in a ranked match, you’ll be given a Ranked Urgent Quest. By completing certain amounts of Ranked Urgent Quests, you’ll unlock special in-game rewards. Complete just one Ranked Urgent Quest during Fortnite Chapter 4: Season OG to unlock the Chapter 1-inspired Ranker’s Tags Back Bling pictured below. It will display the color of the highest rank you’ve reached this season. 


And that’s everything you need to know about Fortnite Chapter 4: Season OG. For more details, including information about ranked cups and a new Metro Boomin lobby track, be sure to check out Epic’s full blog post.

For more, watch this nostalgic Tilted Towers throwback Fortnite trailer, and then read about how Epic’s chief creative officer and Fortnite head Donald Mustard has left the company.


Are you jumping into Fortnite Chapter 4: Season OG? Let us know where you’re dropping first in the comments below!

The Friday Roundup – Filmora Updates & PowerDirector Tips

What is new in Filmora 13

Wondershare has released the new version of Filmora 13 into the wild this week with a bunch of new features. In this version there are a few more A.I.-driven tools plus a couple of other upgrades. Some of the…

Innovating for health equity

Throughout her time at MIT, senior Abigail Schipper has volunteered as an EMT with MIT EMS, a student-run ambulance supporting the MIT community as well as Cambridge, Massachusetts, and Boston.

As a first responder, she witnessed a troubling paradox: During the daytime, she would learn about state-of-the-art medical technology in her biomedical engineering classes; however, by night, she witnessed firsthand people with “very preventable conditions sleeping on the doorsteps of the world’s greatest hospitals.”

This has motivated Schipper to devote her career to increasing people’s access to medical advancements.

As a mechanical engineering major with a concentration in biomedicine and a minor in biology, Schipper has approached the issue of health equity from a variety of directions, including research, innovation, and community service. As a student researcher, for example, she has worked on a self-dissolving birth control implant, and inspired by her work as a CPR instructor, she helped to create affordable CPR manikins with breasts, for training courses.

As Schipper approaches the end of her time as an undergraduate, she looks forward to expanding her perspective on medicine by pursuing her master’s degree in public health before proceeding to medical school.

“At MIT, I’ve had the chance to look at health care both from the provider perspective as an EMT, and as an engineer through my research. Yet, as I’ve worked on these design challenges, I’m relentlessly confronted by policy problems — the medical debt that keeps one of my patients on the street, or the ever-changing laws around reproductive health. A device can’t work if the system around it is inhospitable,” Schipper says. “Before medical school, I want to spend a year studying the broader health care system and learning the best practices for analyzing and studying these issues of health equity, in order to more effectively solve them.”

“MIT just lets you do things”

Schipper joined MIT EMS as a first-year student in 2020, during the Covid-19 pandemic. Eager to provide help, she began the long training process to become certified as an EMT and join fellow student and staff volunteers on emergency calls. She jokes that she also had another motive: “I thought I would be a cooler person if I knew how to drive an ambulance in Boston.”

In her sophomore year, Schipper organized every CPR class hosted by MIT EMS, certifying thousands of people. She says it was a “strange” activity for a 19-year-old, but concedes, “MIT just lets you do things, it’s funny.”

By junior year, Schipper was elected director of operations. During this time, she worked to acquire a second ambulance vehicle for MIT EMS to broaden their immediate coverage of 911 calls on campus. “Mutual aid is such a big part of what MIT EMS does,” she says, explaining that improving emergency response times was integral to maintaining the health and well-being of the community MIT EMS serves.

Although “retired” from her executive position, Schipper now works as a crew chief for MIT EMS and leads the Stop the Bleed program, an emergency-care training course for the public that focuses on treating serious wounds.

She credits her EMT experience with linking her to the people of greater Boston. “A third of our calls are on campus but two-thirds are just 911 calls in Cambridge and Boston, so you really get out into the community. I feel a lot more connected to Boston than I think a lot of my peers do,” she says.

Innovating for equity 

During her time as a CPR instructor, Schipper noticed the absence of manikins with breasts in training courses and was often asked by students whether women could even receive the chest compressions required for CPR. “I saw something I was provided and thought, ‘I’m a mechanical engineer. I think I can do better,’” she says.

With a team of bioengineers, mechanical engineers, and social scientists from MIT EMS and Harvard’s Crimson EMS, Schipper co-founded the LifeSaveHer project to produce affordable, anatomically realistic manikins with breasts for gender-equitable CPR training. According to the project’s website, women are 29 percent less likely than men to survive an out-of-hospital cardiac arrest, and they are 27 percent less likely to receive bystander CPR. The group is currently supported by the MIT PKG IDEAS Social Innovation Challenge, which awarded LifeSaveHer the top prize in its 2023 competition.

Gender equity in health was also the focus of Schipper’s undergraduate research project in the lab of Associate Professor Gio Traverso, in which she helped create a birth control implant that dissolves in the body. “Current birth control implants last for about three years and usually require surgical intervention to be taken out,” she explains. “If you don’t have access to surgical facilities, that’s not realistic for you, but you still deserve to have a contraceptive option that’s easy.”

Schipper has taken her public health research abroad, too. In the summer of 2023, she worked with University College London Hospital’s Find and Treat service. In London, she also studied airborne filtration and worked to reduce the risk of infectious disease by building lower-cost carbon dioxide sensors.

The importance of community

In her research and extracurriculars alike, establishing strong social connections has been an integral component in Schipper’s student experience.

Along with MIT EMS, Schipper is a member of Sigma Kappa and the Burton Third Bombers living community. She appreciates the fun opportunities her living community creates, something busy MIT students can struggle to find time for. On her dorm floor, “Whatever silly idea you have, you will have at least 10 people who are willing to do it with you,” she says.

Reflecting on the past three years, Schipper says, “Coming into MIT as a freshman, a lot of the things I was doing felt very disjointed. I’m working on this ambulance but I’m also joining this sorority and I’m also studying mechanical engineering. As I’ve progressed, everything has consolidated and then stemmed from each other. MIT has given me a lot of support and opportunities to find a central thing, and then pursue offshoots in a very meaningful way.”

Industrial network security management solutions 2021

By Shira Landau, Editor-in-Chief, CyberTalk.org

EXECUTIVE SUMMARY:

Industrial control systems deliver water, electricity, fuel and provide other essential services that power millions of enterprises around the world. These systems are susceptible to cyber threats, especially as industry 5.0 increases cyber-physical connectivity. In the recent past, numerous disturbing cases of cyber intrusion have occurred. Industrial network security is mission-critical.

Industrial network security is similar to standard enterprise information system security. However, it does present its own unique challenges. Industrial network security represents a critical business performance indicator. Industrial network security configurations provide insight into business risk exposure, level of corporate competitiveness, and indicate future business continuity, or potential lack thereof.

Systems and networks in industrial control systems (ICSs) have special features and facets, and are often built on trusted computing platforms with commercial operating systems. Industrial control systems are designed with ruggedness in mind; most perform reliably for long periods of time. The typical integrated industrial control system might have a life expectancy of several decades.

The original system designers likely didn’t envision continual cyber-physical security upgrades. But cyber threats are evolving every day. How can industrial network security keep pace?

Industrial network security: An imperative

Improved industrial network security is an imperative. Industrial systems often rely on legacy devices and may run on legacy protocols. These systems were initially developed for long-term use far ahead of the proliferation of internet connectivity, web-based software and real-time enterprise information management portals.

In the early days of industrial networks, information security did not receive much attention. Physical security took priority. Systems were air-gapped, which appeared adequate in terms of cyber security. In the 1990s, as organizations re-engineered business operations and reevaluated operational needs, businesses began to deploy firewalls and other means of blocking attackers. As the years passed, an increasing number of security tactics were tossed into the mix. Nonetheless, industrial network security (INS) needed to play catch-up, and many INS leaders are still doing so today.

Industrial network security: The challenge

International bodies, such as the United Nations, are working to address industrial control system threats. At the same time, industrial organizations must take independent action around cyber security.

One challenge that plagues these systems is that threat defense measures can conflict with core network requirements. To visualize this, consider how CEOs and rank-and-file employees alike often try to skirt cyber security protocols when they slow down productivity. A similar security vs. function tradeoff can occur within industrial system development.

Sophisticated and advanced cyber threats represent a prominent problem for industrial groups. In addition, accidental cyber incidents are a growing concern. For example, an operational system engineer may introduce a network threat during regular technical maintenance.

It’s not just connected networks that are at risk. Industrial networks that remain disconnected from the internet can still experience cyber intrusions. This can lead to data loss and other untoward business consequences. For instance, a third-party vendor may update systems, but in so doing, connect an unauthorized device that either intentionally or accidentally captures proprietary information.

Industrial network security: The solutions

  • Infrastructure attacks represent imminent threats to industrial groups. Many recent attacks on operational technology (OT) and ICS networks appear based on IT attack vectors, like spear phishing campaigns via email and ransomware on endpoints. Threat prevention solutions can stop these kinds of attacks before they reach the ICS equipment.
  • An OT engineer may intend to patch systems expeditiously, only to find that the patch is slow to install, and postpone the action, leaving the system unpatched. Operational technology cyber security vendors may be able to offer intrusion prevention systems (IPS) that reduce vulnerabilities through “virtual patching.” This type of solution can protect Windows-based workstations, servers, and SCADA equipment.
  • Antivirus and anti-bot technologies can also protect industrial equipment. The software can identify threats before they lead to extreme harm. Malware and bots alike can result in network failures, grinding business operations to a halt.
  • To properly define a security policy, industrial groups must have solutions in place that provide visibility into and understanding of the environment. Visibility means seeing all of the assets within the environment and recognizing what they are and what function they perform. An understanding of granular configurations is also critical.
  • Developing a behavioral baseline for characterization of legitimate traffic can further enhance security. To optimize a security baseline, experts recommend a focus on traffic logging and behavior analysis. Ultimately, organizations should strive for a baseline that can help hunt for threats within the network, detect anomalies and provide other valuable services.
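As a toy illustration of that last point, the sketch below flags hosts whose current traffic volume deviates sharply from a historical baseline. The host names, traffic figures, and three-sigma threshold are all hypothetical assumptions for this example; production systems model protocol behavior in far richer ways.

```python
from statistics import mean, stdev

def build_baseline(history):
    """Build a per-host baseline (mean, stdev) from historical
    traffic volumes, e.g. megabytes transferred per hour."""
    return {host: (mean(vols), stdev(vols)) for host, vols in history.items()}

def flag_anomalies(baseline, current, threshold=3.0):
    """Flag hosts whose current traffic deviates from their baseline
    by more than `threshold` standard deviations (a simple z-score test)."""
    flagged = []
    for host, volume in current.items():
        mu, sigma = baseline.get(host, (0.0, 1.0))
        if sigma > 0 and abs(volume - mu) / sigma > threshold:
            flagged.append(host)
    return flagged

# Hypothetical hourly traffic history for two industrial hosts
history = {
    "plc-01": [100, 110, 95, 105, 98],
    "hmi-02": [50, 55, 48, 52, 51],
}
baseline = build_baseline(history)
print(flag_anomalies(baseline, {"plc-01": 102, "hmi-02": 400}))  # -> ['hmi-02']
```

A real deployment would baseline many signals at once (protocols, ports, timing, command sequences), but the principle is the same: characterize legitimate traffic first, then hunt for departures from it.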

In conclusion

As industry 5.0 evolves, strengthening industrial network security will enable businesses and individuals to operate in safer and more stable environments. The consequences of industrial control system network failures are extreme, and should be avoided at all costs. Avoid being the catalyst of the domino effect by shoring up your organization’s network security.

For more information about industrial network security, click here. Lastly, for more cyber security and business insights, analysis and resources, sign up for the Cyber Talk newsletter.

Here’s The Best Graphics Mode For Call Of Duty: Modern Warfare III

Call of Duty: Modern Warfare III, the latest in the long-running FPS series that’s also the first back-to-back sequel in years, is almost here – in fact, if you’ve preordered a digital version of the game, it’s already here, kinda. That’s because all digital version preorders unlock early access to the Modern Warfare III campaign. That’s eight days before the full release on November 10, which is when its multiplayer suite goes live. Earlier this week, publisher Activision Blizzard and developer Sledgehammer Games released the PC specs and system requirements for the game. Now, we have our hands on Modern Warfare III and can tell you what the best graphics mode is on PlayStation 5 and Xbox Series X/S and what visual settings to use. 

If you’ve played recent Call of Duty entries, you probably already have a good idea of what kind of options are available in the game. But if you’re looking for a refresher, or are a newcomer and want to know how to experience Modern Warfare III at its best on console, we have you covered. 

Here’s The Best Graphics Mode For Call Of Duty: Modern Warfare III


If you don’t care about any of the reasons why or explanations, I’ll save you the trouble: the best graphics mode for Modern Warfare III is its 120Hz mode, which can be toggled on and off in the game’s graphics settings. However, it requires a TV or monitor with a 120Hz refresh rate. 

Let’s talk about why. 

Call of Duty has been pushing out 60 FPS gameplay – by way of 60Hz refresh rates on TVs, and even higher on PC monitors – for years now; 60 FPS action feels like a baseline requirement in FPS gaming. But with the latest generation of consoles, the PS5 and Xbox Series X/S, Activision Blizzard has been able to bring console players even higher frame rates, previously reserved for those on powerful PC rigs. However, like most games that offer 120 FPS gameplay, you need a monitor or TV that can handle it. And while there are some exceptions, like 120Hz refresh rates at 1080p resolution, your TV needs HDMI 2.1 in order to tap into 120Hz refresh rates with 4K resolution, HDR, and all the other bells and whistles. HDMI 2.1 is relatively new technology, becoming more and more common since the start of this console generation in 2020. 

That said, if you do have a TV or monitor capable of a 120Hz refresh rate, go into your console’s video settings to ensure it’s turned on and working as intended. Then, enabling the 120Hz option in Modern Warfare III is a breeze. If you have the option to play the game at 120 FPS, you absolutely should – with shooters, the smoother (and quicker) the gameplay, the better, especially when competing in multiplayer against other players who might have this advantage. 

Start by accessing the game’s graphics settings, as seen below.


After clicking that, you’ll see a suite of options related to Modern Warfare III’s visual settings. You can check them out in the slideshow below; pay special attention to turning on the 120Hz mode (as seen in our first image): 

Once you’ve got 120Hz switched on, you’re all set. Enjoy Modern Warfare III’s action at a buttery-smooth 120 FPS. But, as you can see, there are loads of other options, and if you’re interested, we break down some of them below: 

  • Field of View: Lowering your FOV means you see less of the world at any given moment; increasing it means you see more. A higher FOV also makes the game feel faster, especially when rotating your view. If you’re looking for something lightning fast, akin to the Doom style of gameplay, crank the FOV up. 
  • On-Demand Texture Streaming: If you want the best visual experience while playing Modern Warfare III, turn this setting on. But it requires an online connection. It also requires more storage space. 
  • World Motion Blur: This comes down to preference – if you want the cinematic blur that happens when moving around a space, where buildings, trees, and more blend together while moving, keep this on. However, it does make some people sick and turning it off won’t really hurt the experience. 
  • Weapon Motion Blur: Same as above, except when moving, this blurs your weapon. 
  • Film Grain: This is pure preference – if you like the grainy look on-screen reminiscent of movies, keep it on. If you don’t, feel free to turn it off. 
  • Depth of Field: With this on, the game camera will blur parts of your view to simulate a camera lens. Admittedly, it’s not much of a gamechanger but for some people, it makes focusing exclusively down a weapon’s sights easier. 
  • FidelityFX CAS: Keep this on – it’s a sharpening technology (Contrast Adaptive Sharpening) that increases the sharpness of the image, ultimately making your visual experience nicer. 

I do want to note that at the end of the day, you should just use whatever visual settings and graphics modes you prefer. There is no right or wrong answer – I simply spend too much time tinkering with these types of things and wanted to hopefully give you a quick and easy guide that explains what I believe are the best settings while playing Modern Warfare III. If you have any questions, drop them in the comments below!

For more about the game, read Game Informer’s breakdown of everything we learned from a recent Call of Duty Next livestream, and then check out this story about how Modern Warfare III won’t be coming to Xbox Game Pass this year, even though Xbox now owns Activision Blizzard.


Are you jumping into Call of Duty: Modern Warfare III at launch? Let us know in the comments below!

Using language to give robots a better grasp of an open-ended world

Imagine you’re visiting a friend abroad, and you look inside their fridge to see what would make for a great breakfast. Many of the items initially appear foreign to you, with each one encased in unfamiliar packaging and containers. Despite these visual distinctions, you begin to understand what each one is used for and pick them up as needed.

Inspired by humans’ ability to handle unfamiliar objects, a group from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) designed Feature Fields for Robotic Manipulation (F3RM), a system that blends 2D images with foundation model features into 3D scenes to help robots identify and grasp nearby items. F3RM can interpret open-ended language prompts from humans, making the method helpful in real-world environments that contain thousands of objects, like warehouses and households.

F3RM offers robots the ability to interpret open-ended text prompts using natural language, helping the machines manipulate objects. As a result, the machines can understand less-specific requests from humans and still complete the desired task. For example, if a user asks the robot to “pick up a tall mug,” the robot can locate and grab the item that best fits that description.

“Making robots that can actually generalize in the real world is incredibly hard,” says Ge Yang, postdoc at the National Science Foundation AI Institute for Artificial Intelligence and Fundamental Interactions and MIT CSAIL. “We really want to figure out how to do that, so with this project, we try to push for an aggressive level of generalization, from just three or four objects to anything we find in MIT’s Stata Center. We wanted to learn how to make robots as flexible as ourselves, since we can grasp and place objects even though we’ve never seen them before.”

Learning “what’s where by looking”

The method could assist robots with picking items in large fulfillment centers with inevitable clutter and unpredictability. In these warehouses, robots are often given a description of the inventory that they’re required to identify. The robots must match the text provided to an object, regardless of variations in packaging, so that customers’ orders are shipped correctly.

For example, the fulfillment centers of major online retailers can contain millions of items, many of which a robot will have never encountered before. To operate at such a scale, robots need to understand the geometry and semantics of different items, with some being in tight spaces. With F3RM’s advanced spatial and semantic perception abilities, a robot could become more effective at locating an object, placing it in a bin, and then sending it along for packaging. Ultimately, this would help factory workers ship customers’ orders more efficiently.

“One thing that often surprises people with F3RM is that the same system also works on a room and building scale, and can be used to build simulation environments for robot learning and large maps,” says Yang. “But before we scale up this work further, we want to first make this system work really fast. This way, we can use this type of representation for more dynamic robotic control tasks, hopefully in real-time, so that robots that handle more dynamic tasks can use it for perception.”

The MIT team notes that F3RM’s ability to understand different scenes could make it useful in urban and household environments. For example, the approach could help personalized robots identify and pick up specific items. The system aids robots in grasping their surroundings — both physically and perceptively.

“Visual perception was defined by David Marr as the problem of knowing ‘what is where by looking,’” says senior author Phillip Isola, MIT associate professor of electrical engineering and computer science and CSAIL principal investigator. “Recent foundation models have gotten really good at knowing what they are looking at; they can recognize thousands of object categories and provide detailed text descriptions of images. At the same time, radiance fields have gotten really good at representing where stuff is in a scene. The combination of these two approaches can create a representation of what is where in 3D, and what our work shows is that this combination is especially useful for robotic tasks, which require manipulating objects in 3D.”

Creating a “digital twin”

F3RM begins to understand its surroundings by taking pictures on a selfie stick. The mounted camera snaps 50 images at different poses, enabling it to build a neural radiance field (NeRF), a deep learning method that takes 2D images to construct a 3D scene. This collage of RGB photos creates a “digital twin” of its surroundings in the form of a 360-degree representation of what’s nearby.

In addition to a highly detailed neural radiance field, F3RM also builds a feature field to augment geometry with semantic information. The system uses CLIP, a vision foundation model trained on hundreds of millions of images to efficiently learn visual concepts. By reconstructing the 2D CLIP features for the images taken by the selfie stick, F3RM effectively lifts the 2D features into a 3D representation.
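That lifting step can be pictured with a much-simplified sketch: project a 3D point into each camera view and average the 2D feature vectors it lands on. F3RM itself distills the features into a neural field, so the function names, pinhole camera model, and mean pooling below are illustrative assumptions, not the paper’s implementation.

```python
import numpy as np

def project(point, intrinsics, extrinsic):
    """Project a 3D world point into (row, col) pixel coordinates
    for one camera, or return None if the point is behind it."""
    p_cam = extrinsic @ np.append(point, 1.0)   # world -> camera frame (3x4 extrinsic)
    if p_cam[2] <= 0:                            # behind the camera plane
        return None
    uv = intrinsics @ (p_cam[:3] / p_cam[2])     # perspective divide, then intrinsics
    return int(uv[1]), int(uv[0])                # (row, col)

def lift_features(point, cameras, feature_maps):
    """Average the 2D feature vectors a 3D point projects onto across
    all views, giving that point a fused 'feature field' value."""
    samples = []
    for (K, E), fmap in zip(cameras, feature_maps):
        px = project(point, K, E)
        if px is None:
            continue
        r, c = px
        h, w, _ = fmap.shape
        if 0 <= r < h and 0 <= c < w:            # only count views that see the point
            samples.append(fmap[r, c])
    return np.mean(samples, axis=0) if samples else None
```

In the real system the 2D maps would be dense CLIP feature images from the 50 selfie-stick captures, and the pooling is learned jointly with the radiance field rather than a plain mean.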

Keeping things open-ended

After receiving a few demonstrations, the robot applies what it knows about geometry and semantics to grasp objects it has never encountered before. Once a user submits a text query, the robot searches through the space of possible grasps to identify those most likely to succeed in picking up the object requested by the user. Each potential option is scored based on its relevance to the prompt, similarity to the demonstrations the robot has been trained on, and if it causes any collisions. The highest-scored grasp is then chosen and executed.
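The selection loop described above can be caricatured in a few lines. This is a toy sketch, not the paper’s actual scoring function: the equal weights, dictionary fields, and cosine-similarity measure are assumptions made for illustration.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def score_grasps(query_feat, demo_feats, candidates, w_rel=0.5, w_sim=0.5):
    """Pick the best grasp: weigh relevance to the text query against
    similarity to demonstrated grasps, discarding colliding candidates."""
    best, best_score = None, float("-inf")
    for grasp in candidates:
        if grasp["collides"]:
            continue  # hard constraint: never execute a colliding grasp
        relevance = cosine(query_feat, grasp["feature"])
        # Compare against the closest demonstration, not the average
        similarity = max(cosine(d, grasp["feature"]) for d in demo_feats)
        score = w_rel * relevance + w_sim * similarity
        if score > best_score:
            best, best_score = grasp, score
    return best
```

Here `query_feat` stands in for the embedded text prompt and each candidate’s `feature` for the local feature-field value at the grasp pose; in F3RM these live in the shared CLIP feature space, which is what lets a language query rank grasps at all.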

To demonstrate the system’s ability to interpret open-ended requests from humans, the researchers prompted the robot to pick up Baymax, a character from Disney’s “Big Hero 6.” While F3RM had never been directly trained to pick up a toy of the cartoon superhero, the robot used its spatial awareness and vision-language features from the foundation models to decide which object to grasp and how to pick it up.

F3RM also enables users to specify which object they want the robot to handle at different levels of linguistic detail. For example, if there is a metal mug and a glass mug, the user can ask the robot for the “glass mug.” If the bot sees two glass mugs and one of them is filled with coffee and the other with juice, the user can ask for the “glass mug with coffee.” The foundation model features embedded within the feature field enable this level of open-ended understanding.

“If I showed a person how to pick up a mug by the lip, they could easily transfer that knowledge to pick up objects with similar geometries such as bowls, measuring beakers, or even rolls of tape. For robots, achieving this level of adaptability has been quite challenging,” says MIT PhD student, CSAIL affiliate, and co-lead author William Shen. “F3RM combines geometric understanding with semantics from foundation models trained on internet-scale data to enable this level of aggressive generalization from just a small number of demonstrations.”

Shen and Yang wrote the paper under the supervision of Isola, with MIT professor and CSAIL principal investigator Leslie Pack Kaelbling and undergraduate students Alan Yu and Jansen Wong as co-authors. The team was supported, in part, by Amazon.com Services, the National Science Foundation, the Air Force Office of Scientific Research, the Office of Naval Research’s Multidisciplinary University Initiative, the Army Research Office, the MIT-IBM Watson Lab, and the MIT Quest for Intelligence. Their work will be presented at the 2023 Conference on Robot Learning.

YoloLiv YoloBox Ultra is the Ultimate YoloBox Experience for Widescreen – Videoguys

Now Accepting Pre-Orders for the NEW! YoloBox Ultra
Combines the Multicam Production Power of YoloBox Pro with the Vertical Video Capability of Instream in an All-in-one, Flexible & Affordable Powerhouse!


YoloBox Ultra combines the widescreen livestreaming capabilities of the YoloBox with the vertical video capabilities of the Instream in one device, allowing you to stream to Facebook, YouTube, Instagram, and TikTok, all from the same unit.


$1,599.00 reg.
$1,499.00 Early Bird Special

Order Now! Shipping Nov 10th!

What’s New:

  • 4 HDMI Inputs: Connect more, amplify connectivity, minimize limitations
  • ISO Recording: Preserve every precious moment
  • Stronger CPU: Innovation taken to new depths and possibilities
  • Cellular Bonding: Deliver rock-solid reliable videos
  • 4K Streaming: Every detail comes alive
  • NDI|HX3 & SRT: New ways to expand and evolve
  • Bigger Battery: 75.48Wh, stream up to 6 hours
  • Bigger & Brighter: See beyond, shine through
YoloBox & Instream In One. Angle it your way.

The YoloBox Ultra combines the YoloBox Pro and the Instream. Beyond the standard horizontal platforms, you can stream vertically to Instagram and TikTok without needing to purchase a separate Instream.


Mission-Critical Cellular Bonding.
3 SIM Data, 5 Connections.
Experience unparalleled global internet connectivity wherever you go with YoloLiv’s Cellular Bonding. It can bond* up to 5 network connections across 4G LTE (1x), Wi-Fi (1x), Ethernet (1x), and USB Modems (2x) – ensuring your stream never misses a beat even over the most challenging network conditions.
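The bonding idea, spreading a stream's packets across several links so that no single connection becomes a point of failure, can be sketched generically. This weighted round-robin toy is an illustration only, not YoloLiv's actual algorithm, and the link weights are made up.

```python
# Generic sketch of link bonding: packets are distributed across all
# available links in proportion to each link's estimated capacity.
# Links, weights, and the scheduling policy are hypothetical.

def bond_packets(packets, links):
    """Assign each packet to a link via weighted round-robin."""
    # Expand each link proportionally to its weight, then cycle through.
    schedule = [name for name, weight in links for _ in range(weight)]
    return [(pkt, schedule[i % len(schedule)]) for i, pkt in enumerate(packets)]

# Hypothetical capacity weights for three of the five possible connections.
links = [("4G LTE", 3), ("Wi-Fi", 2), ("Ethernet", 4)]
assignments = bond_packets(range(6), links)
```

A real bonding implementation would adapt these weights continuously as link quality changes and reorder packets at the receiver; the fixed schedule here only conveys the basic distribution idea.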


Versatile Input & Output
Next Level Production Flexibility


More Powerful CPU
Innovation taken to new depths & possibilities.
  • 2.5X CPU Performance
  • 4X GPU Performance
  • 2.3X RAM
  • 8X RAM


A Pro Tool, Made For Everyone
From simple tasks to the most demanding professional client projects.


Discover a Wide Range of YoloLiv’s High-Quality Products!

2023-24 Takeda Fellows: Advancing research at the intersection of AI and health

The School of Engineering has selected 13 new Takeda Fellows for the 2023-24 academic year. With support from Takeda, the graduate students will conduct pathbreaking research ranging from remote health monitoring for virtual clinical trials to ingestible devices for at-home, long-term diagnostics.

Now in its fourth year, the MIT-Takeda Program, a collaboration between MIT’s School of Engineering and Takeda, fuels the development and application of artificial intelligence capabilities to benefit human health and drug development. Part of the Abdul Latif Jameel Clinic for Machine Learning in Health, the program coalesces disparate disciplines, merges theory and practical implementation, combines algorithm and hardware innovations, and creates multidimensional collaborations between academia and industry.

The 2023-24 Takeda Fellows are:

Adam Gierlach

Adam Gierlach is a PhD candidate in the Department of Electrical Engineering and Computer Science. Gierlach’s work combines innovative biotechnology with machine learning to create ingestible devices for advanced diagnostics and delivery of therapeutics. In his previous work, Gierlach developed a non-invasive, ingestible device for long-term gastric recordings in free-moving patients. With the support of a Takeda Fellowship, he will build on this pathbreaking work by developing smart, energy-efficient, ingestible devices powered by application-specific integrated circuits for at-home, long-term diagnostics. These revolutionary devices — capable of identifying, characterizing, and even correcting gastrointestinal diseases — represent the leading edge of biotechnology. Gierlach’s innovative contributions will help to advance fundamental research on the enteric nervous system and help develop a better understanding of gut-brain axis dysfunctions in Parkinson’s disease, autism spectrum disorder, and other prevalent disorders and conditions.

Vivek Gopalakrishnan

Vivek Gopalakrishnan is a PhD candidate in the Harvard-MIT Program in Health Sciences and Technology. Gopalakrishnan’s goal is to develop biomedical machine-learning methods to improve the study and treatment of human disease. Specifically, he employs computational modeling to advance new approaches for minimally invasive, image-guided neurosurgery, offering a safe alternative to open brain and spinal procedures. With the support of a Takeda Fellowship, Gopalakrishnan will develop real-time computer vision algorithms that deliver high-quality, 3D intraoperative image guidance by extracting and fusing information from multimodal neuroimaging data. These algorithms could allow surgeons to reconstruct 3D neurovasculature from X-ray angiography, thereby enhancing the precision of device deployment and enabling more accurate localization of healthy versus pathologic anatomy.

Hao He

Hao He is a PhD candidate in the Department of Electrical Engineering and Computer Science. His research interests lie at the intersection of generative AI, machine learning, and their applications in medicine and human health, with a particular emphasis on passive, continuous, remote health monitoring to support virtual clinical trials and health-care management. More specifically, He aims to develop trustworthy AI models that promote equitable access and deliver fair performance independent of race, gender, and age. In his past work, He has developed monitoring systems applied in clinical studies of Parkinson’s disease, Alzheimer’s disease, and epilepsy. Supported by a Takeda Fellowship, He will develop a novel technology for the passive monitoring of sleep stages (using radio signaling) that seeks to address existing gaps in performance across different demographic groups. His project will tackle the problem of imbalance in available datasets and account for intrinsic differences across subpopulations, using generative AI and multi-modality/multi-domain learning, with the goal of learning robust features that are invariant to different subpopulations. He’s work holds great promise for delivering advanced, equitable health-care services to all people and could significantly impact health care and AI.

Chengyi Long

Chengyi Long is a PhD candidate in the Department of Civil and Environmental Engineering. Long’s interdisciplinary research integrates the methodology of physics, mathematics, and computer science to investigate questions in ecology. Specifically, Long is developing a series of potentially groundbreaking techniques to explain and predict the temporal dynamics of ecological systems, including human microbiota, which are essential subjects in health and medical research. His current work, supported by a Takeda Fellowship, is focused on developing a conceptual, mathematical, and practical framework to understand the interplay between external perturbations and internal community dynamics in microbial systems, which may serve as a key step toward finding bio solutions to health management. A broader perspective of his research is to develop AI-assisted platforms to anticipate the changing behavior of microbial systems, which may help to differentiate between healthy and unhealthy hosts and design probiotics for the prevention and mitigation of pathogen infections. By creating novel methods to address these issues, Long’s research has the potential to offer powerful contributions to medicine and global health.

Omar Mohd

Omar Mohd is a PhD candidate in the Department of Electrical Engineering and Computer Science. Mohd’s research is focused on developing new technologies for the spatial profiling of microRNAs, with potentially important applications in cancer research. Through innovative combinations of micro-technologies and AI-enabled image analysis to measure the spatial variations of microRNAs within tissue samples, Mohd hopes to gain new insights into drug resistance in cancer. This work, supported by a Takeda Fellowship, falls within the emerging field of spatial transcriptomics, which seeks to understand cancer and other diseases by examining the relative locations of cells and their contents within tissues. The ultimate goal of Mohd’s current project is to find multidimensional patterns in tissues that may have prognostic value for cancer patients. One valuable component of his work is an open-source AI program developed with collaborators at Beth Israel Deaconess Medical Center and Harvard Medical School to auto-detect cancer epithelial cells from other cell types in a tissue sample and to correlate their abundance with the spatial variations of microRNAs. Through his research, Mohd is making innovative contributions at the interface of microsystem technology, AI-based image analysis, and cancer treatment, which could significantly impact medicine and human health.

Sanghyun Park

Sanghyun Park is a PhD candidate in the Department of Mechanical Engineering. Park specializes in the integration of AI and biomedical engineering to address complex challenges in human health. Drawing on his expertise in polymer physics, drug delivery, and rheology, his research focuses on the pioneering field of in-situ forming implants (ISFIs) for drug delivery. Supported by a Takeda Fellowship, Park is currently developing an injectable formulation designed for long-term drug delivery. The primary goal of his research is to unravel the compaction mechanism of drug particles in ISFI formulations through comprehensive modeling and in-vitro characterization studies utilizing advanced AI tools. He aims to gain a thorough understanding of this unique compaction mechanism and apply it to drug microcrystals to achieve properties optimal for long-term drug delivery. Beyond these fundamental studies, Park’s research also focuses on translating this knowledge into practical applications in a clinical setting through animal studies specifically aimed at extending drug release duration and improving mechanical properties. The innovative use of AI in developing advanced drug delivery systems, coupled with Park’s valuable insights into the compaction mechanism, could contribute to improving long-term drug delivery. This work has the potential to pave the way for effective management of chronic diseases, benefiting patients, clinicians, and the pharmaceutical industry.

Huaiyao Peng

Huaiyao Peng is a PhD candidate in the Department of Biological Engineering. Peng’s research interests are focused on engineered tissue, microfabrication platforms, cancer metastasis, and the tumor microenvironment. Specifically, she is advancing novel AI techniques for the development of pre-cancer organoid models of high-grade serous ovarian cancer (HGSOC), an especially lethal and difficult-to-treat cancer, with the goal of gaining new insights into progression and effective treatments. Peng’s project, supported by a Takeda Fellowship, will be one of the first to use cells from serous tubal intraepithelial carcinoma lesions found in the fallopian tubes of many HGSOC patients. By examining the cellular and molecular changes that occur in response to treatment with small molecule inhibitors, she hopes to identify potential biomarkers and promising therapeutic targets for HGSOC, including personalized treatment options for HGSOC patients, ultimately improving their clinical outcomes. Peng’s work has the potential to bring about important advances in cancer treatment and spur innovative new applications of AI in health care. 

Priyanka Raghavan

Priyanka Raghavan is a PhD candidate in the Department of Chemical Engineering. Raghavan’s research interests lie at the frontier of predictive chemistry, integrating computational and experimental approaches to build powerful new predictive tools for societally important applications, including drug discovery. Specifically, Raghavan is developing novel models to predict small-molecule substrate reactivity and compatibility in regimes where little data is available (the most realistic regimes). A Takeda Fellowship will enable Raghavan to push the boundaries of her research, making innovative use of low-data and multi-task machine learning approaches, synthetic chemistry, and robotic laboratory automation, with the goal of creating an autonomous, closed-loop system for the discovery of high-yielding organic small molecules in the context of underexplored reactions. Raghavan’s work aims to identify new, versatile reactions to broaden a chemist’s synthetic toolbox with novel scaffolds and substrates that could form the basis of essential drugs. Her work has the potential for far-reaching impacts in early-stage, small-molecule discovery and could help make the lengthy drug-discovery process significantly faster and cheaper.

Zhiye Song

Zhiye “Zoey” Song is a PhD candidate in the Department of Electrical Engineering and Computer Science. Song’s research integrates cutting-edge approaches in machine learning (ML) and hardware optimization to create next-generation, wearable medical devices. Specifically, Song is developing novel approaches for the energy-efficient implementation of ML computation in low-power medical devices, including a wearable ultrasound “patch” that captures and processes images for real-time decision-making capabilities. Her recent work, conducted in collaboration with clinicians, has centered on bladder volume monitoring; other potential applications include blood pressure monitoring, muscle diagnosis, and neuromodulation. With the support of a Takeda Fellowship, Song will build on that promising work and pursue key improvements to existing wearable device technologies, including developing low-compute and low-memory ML algorithms and low-power chips to enable ML on smart wearable devices. The technologies emerging from Song’s research could offer exciting new capabilities in health care, enabling powerful and cost-effective point-of-care diagnostics and expanding individual access to autonomous and continuous medical monitoring.

Peiqi Wang

Peiqi Wang is a PhD candidate in the Department of Electrical Engineering and Computer Science. Wang’s research aims to develop machine learning methods for learning and interpretation from medical images and associated clinical data to support clinical decision-making. He is developing a multimodal representation learning approach that aligns knowledge captured in large amounts of medical image and text data to transfer this knowledge to new tasks and applications. Supported by a Takeda Fellowship, Wang will advance this promising line of work to build robust tools that interpret images, learn from sparse human feedback, and reason like doctors, with potentially major benefits to important stakeholders in health care.

Oscar Wu

Haoyang “Oscar” Wu is a PhD candidate in the Department of Chemical Engineering. Wu’s research integrates quantum chemistry and deep learning methods to accelerate the process of small-molecule screening in the development of new drugs. By identifying and automating reliable methods for finding transition state geometries and calculating barrier heights for new reactions, Wu’s work could make it possible to conduct the high-throughput ab initio calculations of reaction rates needed to screen the reactivity of large numbers of active pharmaceutical ingredients (APIs). A Takeda Fellowship will support his current project to: (1) develop open-source software for high-throughput quantum chemistry calculations, focusing on the reactivity of drug-like molecules, and (2) develop deep learning models that can quantitatively predict the oxidative stability of APIs. The tools and insights resulting from Wu’s research could help to transform and accelerate the drug-discovery process, offering significant benefits to the pharmaceutical and medical fields and to patients.

Soojung Yang

Soojung Yang is a PhD candidate in the Department of Materials Science and Engineering. Yang’s research applies cutting-edge methods in geometric deep learning and generative modeling, along with atomistic simulations, to better understand and model protein dynamics. Specifically, Yang is developing novel tools in generative AI to explore protein conformational landscapes that offer greater speed and detail than physics-based simulations at a substantially lower cost. With the support of a Takeda Fellowship, she will build upon her successful work on the reverse transformation of coarse-grained proteins to the all-atom resolution, aiming to build machine-learning models that bridge multiple size scales of protein conformation diversity (all-atom, residue-level, and domain-level). Yang’s research holds the potential to provide a powerful and widely applicable new tool for researchers who seek to understand the complex protein functions at work in human diseases and to design drugs to treat and cure those diseases.

Yuzhe Yang

Yuzhe Yang is a PhD candidate in the Department of Electrical Engineering and Computer Science. Yang’s research interests lie at the intersection of machine learning and health care. In his past and current work, Yang has developed and applied innovative machine-learning models that address key challenges in disease diagnosis and tracking. His many notable achievements include the creation of one of the first machine learning-based solutions using nocturnal breathing signals to detect Parkinson’s disease (PD), estimate disease severity, and track PD progression. With the support of a Takeda Fellowship, Yang will expand this promising work to develop an AI-based diagnosis model for Alzheimer’s disease (AD) using sleep-breathing data that is significantly more reliable, flexible, and economical than current diagnostic tools. This passive, in-home, contactless monitoring system — resembling a simple home Wi-Fi router — will also enable remote disease assessment and continuous progression tracking. Yang’s groundbreaking work has the potential to advance the diagnosis and treatment of prevalent diseases like PD and AD, and it offers exciting possibilities for addressing many health challenges with reliable, affordable machine-learning tools. 

How AI Boosts Fintech: 7 Promising AI-Powered Industries To Follow

When Willie Sutton, once one of America’s most wanted fugitives, was asked why he robbed banks, his response was remarkably simple: “Because that’s where the money is.” This is the same answer that could be given to those who inquire about the growing tendency towards regulation…