Phi-3 and OpenELM, two major small model releases this week….
Exploring the history of data-driven arguments in public life
Political debates today may not always be exceptionally rational, but they are often infused with numbers. If people are discussing the economy or health care or climate change, sooner or later they will invoke statistics.
It was not always thus. Our habit of using numbers to make political arguments has a history, and William Deringer is a leading historian of it. Indeed, in recent years Deringer, an associate professor in MIT’s Program in Science, Technology, and Society (STS), has carved out a distinctive niche through his scholarship showing how quantitative reasoning has become part of public life.
In his prize-winning 2018 book “Calculated Values” (Harvard University Press), Deringer identified a time in British public life from the 1680s to the 1720s as a key moment when the practice of making numerical arguments took hold — a trend deeply connected with the rise of parliamentary power and political parties. Crucially, freedom of the press also expanded, allowing greater scope for politicians and the public to have frank discussions about the world as it was, backed by empirical evidence.
Deringer’s second book project, in progress and under contract to Yale University Press, digs further into a concept from the first book — the idea of financial discounting. This is a calculation to estimate what money (or other things) in the future is worth today, to assign those future objects a “present value.” Some skilled mathematicians understood discounting in medieval times; its use expanded in the 1600s; today it is very common in finance and is the subject of debate in relation to climate change, as experts try to estimate ideal spending levels on climate matters.
“The book is about how this particular technique came to have the power to weigh in on profound social questions,” Deringer says. “It’s basically about compound interest, and it’s at the center of the most important global question we have to confront.”
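For readers who want to see the arithmetic behind the technique, here is a minimal sketch of discounting to present value under compound interest; the rate, horizon, and dollar amounts are illustrative assumptions, not figures from Deringer's research.

```python
def present_value(future_value: float, rate: float, years: float) -> float:
    """Discount a future sum back to today at a constant annual rate."""
    return future_value / (1 + rate) ** years

# Illustrative numbers only: $1,000 received 30 years from now,
# discounted at 3 percent per year, is worth about $412 today.
print(round(present_value(1_000, 0.03, 30), 2))  # 411.99
```

The choice of rate is exactly the contested point in the climate debates the book addresses: a higher discount rate shrinks the present value of far-future benefits, while a lower one inflates it.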
Numbers alone do not make a debate rational or informative; they can be false, misleading, used to entrench interests, and so on. Indeed, a key theme in Deringer’s work is that when quantitative reasoning gains ground, the question is why, and to whose benefit. In this sense his work aligns with the long-running and always-relevant approach of the Institute’s STS faculty, in thinking carefully about how technology and knowledge are applied to the world.
“The broader culture has become more attuned to STS; whether it’s conversations about AI or algorithmic fairness or climate change or energy, these are simultaneously technical and social issues,” Deringer says. “Teaching undergraduates, I’ve found the awareness of that at MIT has only increased.” For both his research and teaching, Deringer received tenure from MIT earlier this year.
Dig in, work outward
Deringer has been focused on these topics since he was an undergraduate at Harvard University.
“I found myself becoming really interested in the history of economics, the history of practical mathematics, data, statistics, and how it came to be that so much of our world is organized quantitatively,” he says.
Deringer wrote a college thesis about how England measured the land it was seizing from Ireland in the 1600s, and then, after graduating, went to work in the finance sector, which gave him a further chance to think about the application of quantification to modern life.
“That was not what I wanted to do forever, but for some of the conceptual questions I was interested in, the societal life of calculations, I found it to be a really interesting space,” Deringer says.
He returned to academia by pursuing his PhD in the history of science at Princeton University. There, in his first year of graduate school, in the archives, Deringer found 18th-century pamphlets about financial calculations concerning the value of stock involved in the infamous episode of speculation known as the South Sea Bubble. That became part of his dissertation; skeptics of the South Sea Bubble were among the prominent early voices bringing data into public debates. It has also helped inform his second book.
First, though, Deringer earned his doctorate from Princeton in 2012, then spent three years as a Mellon Postdoctoral Research Fellow at Columbia University. He joined the MIT faculty in 2015. At the Institute, he finished turning his dissertation into the “Calculated Values” book — which won the 2019 Oscar Kenshur Prize for the best book from the Center for Eighteenth-Century Studies at Indiana University, and was co-winner of the 2021 Joseph J. Spengler Prize for best book from the History of Economics Society.
“My method as a scholar is to dig into the technical details, then work outward historically from them,” Deringer says.
A long historical chain
Even as Deringer was writing his first book, the idea for the second one was taking root in his mind. Those South Sea Bubble pamphlets he had found while at Princeton incorporated discounting, which was intermittently present in “Calculated Values.” Deringer was intrigued by how adept 18th-century figures were at discounting.
“Something that I thought of as a very modern technique seemed to be really well-known by a lot of people in the 1720s,” he says.
At the same time, a conversation with an academic colleague in philosophy made it clear to Deringer how contested conclusions about discounting had become in debates over climate change policy. He soon resolved to write the “biography of a calculation” about financial discounting.
“I knew my next book had to be about this,” Deringer says. “I was very interested in the deep historical roots of discounting, and it has a lot of present urgency.”
Deringer says the book will incorporate material about the financing of English cathedrals, the heavy use of discounting in the mining industry during the Industrial Revolution, a revival of discounting in 1960s policy circles, and climate change, among other things. In each case, he is carefully looking at the interests and historical dynamics behind the use of discounting.
“For people who use discounting regularly, it’s like gravity: It’s very obvious that to be rational is to discount the future according to this formula,” Deringer says. “But if you look at history, what is thought of as rational is part of a very long historical chain of people applying this calculation in various ways, and over time that’s just how things are done. I’m really interested in pulling apart that idea that this is a sort of timeless rational calculation, as opposed to a product of this interesting history.”
Working in STS, Deringer notes, has helped encourage him to link together numerous historical time periods into one book about the numerous ways discounting has been used.
“I’m not sure that pursuing a book that stretches from the 17th century to the 21st century is something I would have done in other contexts,” Deringer says. He is also quick to credit his colleagues in STS and in other programs for helping create the scholarly environment in which he is thriving.
“I came in with a really amazing cohort of other scholars in SHASS,” Deringer notes, referring to the MIT School of Humanities, Arts, and Social Sciences. He cites others receiving tenure in the last year such as his STS colleague Robin Scheffler, historian Megan Black, and historian Caley Horan, with whom Deringer has taught graduate classes on the concept of risk in history. In all, Deringer says, the Institute has been an excellent place for him to pursue interdisciplinary work on technical thought in history.
“I work on very old things and very technical things,” Deringer says. “But I’ve found a wonderful welcoming at MIT from people in different fields who light up when they hear what I’m interested in.”
Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models
The advancements in large language models have significantly accelerated the development of natural language processing, or NLP. The introduction of the transformer framework proved to be a milestone, facilitating the development of a new wave of language models, including OPT and BERT, which exhibit profound linguistic…
Three from MIT awarded 2024 Guggenheim Fellowships
MIT faculty members Roger Levy, Tracy Slatyer, and Martin Wainwright are among 188 scientists, artists, and scholars awarded 2024 fellowships from the John Simon Guggenheim Memorial Foundation. Working across 52 disciplines, the fellows were selected from almost 3,000 applicants for “prior career achievement and exceptional promise.”
Each fellow receives a monetary stipend to pursue independent work at the highest level. Since its founding in 1925, the Guggenheim Foundation has awarded over $400 million in fellowships to more than 19,000 fellows. This year, MIT professors were recognized in the categories of neuroscience, physics, and data science.
Roger Levy is a professor in the Department of Brain and Cognitive Sciences. Combining computational modeling of large datasets with psycholinguistic experimentation, his work furthers our understanding of the cognitive underpinning of language processing, and helps to design models and algorithms that will allow machines to process human language. He is a recipient of the Alfred P. Sloan Research Fellowship, the NSF Faculty Early Career Development (CAREER) Award, and a fellowship at the Center for Advanced Study in the Behavioral Sciences.
Tracy Slatyer is a professor in the Department of Physics as well as the Center for Theoretical Physics in the MIT Laboratory for Nuclear Science and the MIT Kavli Institute for Astrophysics and Space Research. Her research focuses on dark matter — novel theoretical models, predicting observable signals, and analysis of astrophysical and cosmological datasets. She was a co-discoverer of the giant gamma-ray structures known as the “Fermi Bubbles” erupting from the center of the Milky Way, for which she received the New Horizons in Physics Prize in 2021. She is also a recipient of a Simons Investigator Award and Presidential Early Career Awards for Scientists and Engineers.
Martin Wainwright is the Cecil H. Green Professor in Electrical Engineering and Computer Science and Mathematics, and affiliated with the Laboratory for Information and Decision Systems and Statistics and Data Science Center. He is interested in statistics, machine learning, information theory, and optimization. Wainwright has been recognized with an Alfred P. Sloan Foundation Fellowship, the Medallion Lectureship and Award from the Institute of Mathematical Statistics, and the COPSS Presidents’ Award from the Joint Statistical Societies. Wainwright has also co-authored books on graphical and statistical modeling, and solo-authored a book on high dimensional statistics.
“Humanity faces some profound existential challenges,” says Edward Hirsch, president of the foundation. “The Guggenheim Fellowship is a life-changing recognition. It’s a celebrated investment into the lives and careers of distinguished artists, scholars, scientists, writers and other cultural visionaries who are meeting these challenges head-on and generating new possibilities and pathways across the broader culture as they do so.”
Zero Trust strategies for navigating IoT/OT security challenges – CyberTalk
Travais ‘Tee’ Sookoo leverages his 25 years of experience in network security, risk management, and architecture to help businesses of all sizes, from startups to multi-nationals, improve their security posture. He has a proven track record of leading and collaborating with security teams and designing secure solutions for diverse industries.
Currently, Tee serves as a Security Engineer for Check Point, covering the Caribbean region. He advises clients on proactive risk mitigation strategies. He thrives on learning from every challenge and is always looking for ways to contribute to a strong cyber security culture within organizations.
In this informative interview, expert Travais Sookoo shares insights into why organizations need to adopt a zero trust strategy for IoT and how to do so effectively. Don’t miss this!
For our less technical readers, why would organizations want to implement zero trust for IoT systems? What is the value? What trends are you seeing?
For a moment, envision your organization as a bustling apartment building. There are tenants (users), deliveries (data), and of course, all sorts of fancy gadgets (IoT devices). In the old days, our threat prevention capabilities might have involved just a single key for the building’s front door (the network perimeter). Anyone with that key could access everything: the mailbox, deliveries, gadgets.
That’s how traditional security for some IoT systems worked. Once the key was obtained, anyone could gain access. With zero trust, instead of giving everyone the master key, each device and user is verified before access is provisioned.
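To make the contrast with the master-key model concrete, here is a minimal, hypothetical sketch of per-request verification in the spirit Sookoo describes; the roles, resources, and posture check are invented for illustration and do not reflect any particular product’s API.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    device_id: str
    user_role: str
    resource: str
    device_patched: bool  # simple stand-in for a fuller device posture check

# Illustrative allow-list: which roles may reach which resources.
POLICY = {
    ("technician", "plc-controller"): True,
    ("analyst", "telemetry-dashboard"): True,
}

def authorize(req: AccessRequest) -> bool:
    """Verify every request explicitly; nothing is trusted for being 'inside'."""
    if not req.device_patched:  # fail closed on poor device posture
        return False
    return POLICY.get((req.user_role, req.resource), False)  # default deny

print(authorize(AccessRequest("cam-042", "analyst", "telemetry-dashboard", True)))  # True
print(authorize(AccessRequest("cam-042", "analyst", "plc-controller", True)))       # False
```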
The world is getting more connected, and the number of IoT devices is exploding, meaning more potential security gaps. Organizations are realizing that zero trust is a proactive way to stay ahead of the curve and keep their data and systems safe.
Zero trust also enables organizations to satisfy many of their compliance requirements and to quickly adapt to ever-increasing industry regulations.
What challenges are organizations experiencing in implementing zero trust for IoT/OT systems?
While zero trust is a powerful security framework, the biggest hurdles I hear about involve technology and personnel.
In terms of technology, the sheer number and variety of IoT devices can be overwhelming. Enforcing strong security measures with active monitoring across this diverse landscape is not an easy task. Additionally, many of these devices lack the processing power to run security or monitoring software, thus making traditional solutions impractical.
Furthermore, scaling zero trust to manage the identities and access controls for potentially hundreds, thousands, even millions of devices can be daunting.
Perhaps the biggest challenge is that business OT systems must prioritize uptime and reliability above all else. Implementing zero trust may require downtime or potentially introduce new points of failure. Finding ways to achieve zero trust without compromising the availability of critical systems takes some manoeuvring.
And now the people aspect: Implementing and maintaining a zero trust architecture requires specialized cyber security expertise, which many organizations may not have. The talent pool for these specialized roles can be limited, making it challenging to recruit and retain qualified personnel.
Additionally, zero trust can significantly change how people interact with OT systems. Organizations need to invest in training staff on new procedures and workflows to ensure a smooth transition.
Could you speak to the role of micro-segmentation in implementing zero trust for IoT/OT systems? How does it help limit lateral movement and reduce the attack surface?
With micro-segmentation, we create firewalls/access controls between zones, making it much harder for attackers to move around. We’re locking the doors between each room in the apartment; even if an attacker gets into the thermostat room (zone), they can’t easily access the room with our valuables (critical systems).
The fewer devices and systems that an attacker can potentially exploit, the better. Micro-segmentation reduces the overall attack surface and the potential blast radius by limiting what devices can access on the network.
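As a rough sketch of that zoning idea (the zone names and flows below are hypothetical), micro-segmentation amounts to an explicit allow-list of zone-to-zone traffic, with everything else denied:

```python
# Hypothetical zone-to-zone allow-list; any flow not listed is denied.
ALLOWED_FLOWS = {
    ("building-sensors", "telemetry-collector"),
    ("telemetry-collector", "analytics"),
}

def flow_permitted(src_zone: str, dst_zone: str) -> bool:
    """Default deny: traffic may only cross segments on the allow-list."""
    return (src_zone, dst_zone) in ALLOWED_FLOWS

# A compromised sensor zone cannot reach critical controllers directly.
print(flow_permitted("building-sensors", "critical-ot-controllers"))  # False
```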
Based on your research and experience, what are some best practices or lessons learned in implementing zero trust for IoT and OT systems that you can share with CISOs?
From discussions I’ve had and my research:
My top recommendation is to understand the device landscape. What assets do you have, what is their purpose, and how critical are they to the business? By knowing the environment, organizations can tailor zero trust policies to optimize both security and business continuity.
Don’t try to boil the ocean! Zero trust is a journey, not a destination. Start small, segmenting critical systems and data first. Learn from that experience and then expand the implementation; each iteration brings greater success with a shrinking margin of error.
Legacy OT systems definitely throw a wrench into plans and can significantly slow adoption of zero trust. Explore how to integrate zero trust principles without compromising core functionalities. It might involve a mix of upgrades and workarounds.
The core principle of zero trust is granting only the minimum access required for a device or user to function (least privilege). Document who needs what and then implement granular access controls to minimize damage from a compromised device.
Continuous monitoring of network activity and device behaviour is essential to identify suspicious activity and potential breaches early on. Ensure that monitoring tools encompass everything and that your teams can use them expertly.
Automating tasks, such as device onboarding, access control enforcement, and security patching can significantly reduce the burden on security teams and improve overall efficiency.
Mandate regular review and policy updates based on new threats, business needs, and regulatory changes.
Securing IoT/OT systems also requires close collaboration between OT and IT teams. Foster teamwork, effective communications and understanding between these departments to break down silos. This cannot be stressed enough. Too often, the security team is the last to weigh in, often after it’s too late.
What role can automation play in implementing and maintaining Zero Trust for IoT/OT systems?
Zero trust relies on granting least privilege access. Automation allows us to enforce these granular controls by dynamically adjusting permissions based on device type, user role, and real-time context.
Adding new IoT devices can be a tedious process and more so if there are hundreds or thousands of these devices. However, automation can greatly streamline device discovery, initial configuration, and policy assignment tasks, thereby freeing up security teams to focus on more strategic initiatives.
Manually monitoring a complex network with numerous devices is overwhelming, but we can automate processes to continuously monitor network activity and device behaviour and identify anomalies that might indicate a potential breach. And if a security incident occurs, we can automate tasks to isolate compromised devices, notify security teams, and initiate remediation procedures.
Through monitoring, it’s possible to identify IoT/OT devices that require patching, which can be crucial, but also time-consuming. It’s possible to automate patch deployment with subsequent verification, and even launch rollbacks in case of unforeseen issues.
If this sounds like a sales pitch, then hopefully you’re sold. There’s no doubt that automation will significantly reduce the burden on security teams, improve the efficiency of zero trust implementation, and greatly strengthen our overall security posture.
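To ground the automation points above, here is a minimal, hypothetical sketch of an automated response loop that quarantines anomalous devices and schedules patches; the device records and the notify/quarantine/patch hooks are placeholders for real integrations (SIEM, NAC, patch management), not any specific vendor’s API.

```python
# Hypothetical automation loop: quarantine anomalous devices, schedule patches,
# and keep the security team informed at each step.
def remediate(devices, notify, quarantine, patch):
    for dev in devices:
        if dev["anomalous"]:
            quarantine(dev["id"])
            notify(f"Device {dev['id']} quarantined pending investigation")
        elif not dev["patched"]:
            patch(dev["id"])
            notify(f"Patch scheduled for device {dev['id']}")

# Illustrative stand-ins for real integrations.
remediate(
    devices=[{"id": "pump-07", "anomalous": True, "patched": True},
             {"id": "cam-12", "anomalous": False, "patched": False}],
    notify=print,
    quarantine=lambda device_id: print(f"Quarantined {device_id}"),
    patch=lambda device_id: print(f"Patching {device_id}"),
)
```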
What metrics would you recommend for measuring the effectiveness of zero trust implementation in IoT and OT environments?
A core tenet of zero trust is limiting how attackers move between devices or otherwise engage in lateral movement. The number of attempted lateral movements detected and blocked can indicate the effectiveness of segmentation and access controls.
While some breaches are inevitable, a significant decrease in compromised devices after implementing zero trust signifies a positive impact. This metric should be tracked alongside the severity of breaches and the time it takes to identify and contain them. With zero trust, it is assumed any device or user, regardless of location, could be compromised.
Mean time to detection (MTTD) and mean time to response (MTTR) are metrics you can use to measure how quickly a security incident is identified and contained. Ideally, zero trust should lead to faster detection and response times, minimizing potential damage.
Zero trust policies enforce granular access controls. Tracking the number of least privilege violations (users or devices accessing unauthorized resources) can expose weaknesses in policy configuration or user behaviour and indicate areas for improvement.
Security hygiene posture goes beyond just devices. It includes factors like patch compliance rates and the effectiveness of user access controls.
Remember the user experience? Tracking user satisfaction with the zero trust implementation process and ongoing security measures can help identify areas for improvement and ensure a balance between security and usability.
It’s important to remember that zero trust is a journey, not a destination. The goal is to continuously improve our security posture and make it more difficult for attackers to exploit vulnerabilities in our IoT/OT systems. Regularly review your metrics and adjust zero trust strategies as needed.
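As a small illustration of how two of these metrics might be computed, here is a sketch that derives MTTD and MTTR from incident timestamps; the incident records are invented, and in practice they would come from a SIEM or ticketing system.

```python
from datetime import datetime, timedelta

# Invented incident records for illustration only.
incidents = [
    {"occurred": datetime(2024, 4, 1, 8, 0),  "detected": datetime(2024, 4, 1, 8, 40),
     "contained": datetime(2024, 4, 1, 10, 0)},
    {"occurred": datetime(2024, 4, 5, 14, 0), "detected": datetime(2024, 4, 5, 14, 20),
     "contained": datetime(2024, 4, 5, 15, 0)},
]

def mean(deltas):
    return sum(deltas, timedelta()) / len(deltas)

mttd = mean([i["detected"] - i["occurred"] for i in incidents])   # mean time to detect
mttr = mean([i["contained"] - i["detected"] for i in incidents])  # mean time to respond
print(f"MTTD: {mttd}, MTTR: {mttr}")  # MTTD: 0:30:00, MTTR: 1:00:00
```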
Is there anything else that you would like to share with the CyberTalk.org audience?
Absolutely! As we wrap up this conversation, I want to leave the CyberTalk.org audience with a few key takeaways concerning securing IoT and OT systems:
Zero trust is a proactive approach to security. By implementing zero trust principles, organizations can significantly reduce the risk of breaches and protect their critical infrastructure.
Don’t go it alone: Security is a team effort. Foster collaboration between IT, OT, and security teams to ensure that everyone is on the same page when it comes to adopting zero trust.
Keep learning: The cyber security landscape is constantly evolving. Stay up-to-date on the latest threats and best practices. Resources like Cybertalk.org are a fantastic place to start.
Focus on what matters: A successful zero trust implementation requires a focus on all three pillars: people, process, and technology. Security awareness training for employees, clearly defined policies and procedures, and the right security tools are all essential elements.
Help is on the way: Artificial intelligence and machine learning will play an increasingly important role in automating zero trust processes and making them even more effective.
Thank you, CyberTalk.org, for the opportunity to share my thoughts.
A musical life: Carlos Prieto ’59 in conversation and concert
World-renowned cellist Carlos Prieto ’59 returned to campus for an event to perform and to discuss his new memoir, “Mi Vida Musical.”
At the April 9 event in the Samberg Conference Center, Prieto spoke about his formative years at MIT and his subsequent career as a professional cellist. The talk was followed by performances of J.S. Bach’s “Cello Suite No. 3” and Eugenio Toussaint’s “Bachriation.” Valerie Chen, a 2022 Sudler Prize winner and Emerson/Harris Fellow, also performed Philip Glass’s “Orbit.”
Prieto was born in Mexico City and began studying the cello when he was 4. He graduated from MIT in 1959 with BS degrees in Course 3, then called Metallurgical Engineering and today Materials Science and Engineering, and in Course 14 (Economics). He was first cellist and a soloist of the MIT Symphony Orchestra. While at MIT, he took all available courses in Russian, which allowed him, years later, to study at Lomonosov University in Moscow.
After graduation from MIT, Prieto returned to Mexico, where he rose to become the head of an integrated iron and steel company.
“When I returned to Mexico, I was very active in my business life, but I was also very active in my music life,” he told the audience. “And at one moment, the music overcame all the other activities and I left my business activities to devote all my time to the cello and I’ve been doing this for the past 50 years.”
During his musical career, Prieto played all over the world and has played and recorded the world premieres of 115 compositions, most of which were written for him. He is the author of 14 books, some of which have been translated into English, Russian, and Portuguese.
Prieto’s honors include the Order of the Arts and Letters from France, the Order of Civil Merit from the King of Spain, and the National Prize for Arts and Sciences from the president of Mexico. In 1993 he was appointed member of the MIT Music and Theater Advisory Committee. In 2014, the School of Humanities, Arts, and Social Sciences awarded Prieto the Robert A. Muh Alumni Award.
TopSpin 2K25 Review – A Strong Return – Game Informer
In the heyday of the tennis-sim video game genre, Top Spin and Virtua Tennis were the best players in the crowded space. However, in the time since the genre’s boom settled, the offerings have fallen off considerably, with both franchises going more than a decade without a new release. TopSpin 2K25 signals the reemergence of the critically acclaimed series, and though it’s been a while since it stepped on the court, it’s evident the franchise hasn’t lost its stroke.
TopSpin 2K25 faithfully recreates the high-speed chess game of real-world tennis. Positioning, spin, timing, and angles are critical to your success. For those unfamiliar with those fundamental tennis tenets, 2K25 does a superb job of onboarding players with TopSpin Academy, which covers everything from where you should stand to how to play different styles. Even as someone who played years of tennis in both real life and video games, I enjoyed going through the more advanced lessons to refamiliarize myself with the various strategies at play.
Once on the court, you learn how crucial those tactics are. The margin of error is extremely thin, as the difference between a winner down the baseline and a shot into the net is often a split-second on the new timing meter. This meter asks you to release the stroke button at the moment the ball is in the ideal striking position relative to your player. Mastering this is pivotal, as it not only improves your shot accuracy but also your power.
TopSpin 2K25 is at its best when you’re in sustained rallies against an evenly matched opponent. Getting off a strong serve to immediately put your opponent on the defensive, then trying to capitalize on their poor positioning as they struggle to claw back into the point, effectively captures the thrill of the real-world game. I also love how distinct each play style feels in action; an offensive baseline player like Serena Williams presents different challenges than a serve-and-volleyer like John McEnroe.
You can hone your skills in one-off exhibition matches, but I spent most of my time in TopSpin 2K25 in MyCareer. Here, you create your player, with whom you’ll train and climb the ranks. As you complete challenges and win matches, you raise your status, which opens new features like upgradeable coaches, equippable skills, and purchasable homes to alleviate the stamina drain from travel. Managing your stamina by sometimes resting is essential to sustain high-level play; pushing yourself too hard can even cause your player to suffer injuries that sideline you for months.
I loved most of my time in MyCareer, but some design decisions ruined the immersion. For example, I ignored portions of the career goals necessary to rank up my player for hours, so while I was in the top 10 global rankings, I was unable to participate in a Grand Slam because I was still at a lower status than my ranking would typically confer. And since repetition is the path to mastery, it’s counterintuitive that repeated training minigames award fewer benefits, particularly since the mode as a whole is a repetitive loop of training, special events, and tournaments. Additionally, MyCareer shines a light on the shallow pool of licensed players on offer. Most of my matches were against created characters, even hours deep. 2K has promised free licensed pros in the post-launch phase, but for now, the game is missing multiple top players.
I’m pleasantly surprised by how unintrusive the use of VC is. In the NBA 2K series, VC, which can be earned slowly or bought using real money, is used to directly improve your player. In TopSpin 2K25, it’s used primarily for side upgrades, like leveling up your coach, relocating your home, earning XP boosts, resetting your attribute distribution, or purchasing cosmetics. Though I’m still not a fan of microtransactions affecting a single-player mode – particularly since it’s almost certainly why you need to be online to play MyCareer – it’s much more palatable than its NBA contemporary.
If you’d rather play against real opponents, you can show off your skills (and your created character) in multiple online modes. World Tour pits your created player against others across the globe in various tournaments and leaderboard challenges, while 2K Tour leverages the roster of licensed players with daily challenges to take on. Outside of minor connection hiccups, I had an enjoyable time tackling the challenges presented by other players online. However, World Tour’s structure means that despite the game’s best efforts, mismatches occur; it’s no fun to play against a created character multiple levels higher than you. Thankfully, these mismatches were the exception rather than the rule in my experience.
TopSpin 2K25 aptly brings the beloved franchise back to center court, showing that not only does the series still have legs, but so does the sim-tennis genre as a whole. Though its modes are somewhat repetitive and it’s missing several high-profile pros at launch, TopSpin 2K25 serves up a compelling package for tennis fans.
1000 AI-powered machines: Vision AI on an industrial scale
Take a deep dive into the world of Vision AI deployment at scale. From initial failures to game-changing successes, uncover the key lessons from deploying over 1000 AI machines across industries like agriculture and manufacturing, and learn the principles behind successful AI implementation….
A snapshot of bias, the human mind, and AI
Understanding human bias, AI systems, and leadership challenges in technology management and their impacts on decision-making….
Blizzard Announces It’s Skipping BlizzCon This Year
Blizzard has decided to cancel this year’s BlizzCon. The company states the event will return in the future, but it plans to showcase upcoming games in a different manner over the coming months.
First announced in a blog post, Blizzard plans to share details on upcoming games like World of Warcraft: The War Within and Diablo IV’s Vessel of Hatred expansion at other trade shows, such as Gamescom. The company also plans to launch “multiple, global, in-person events” for Warcraft’s 30th anniversary, which are described as being “distinct” from BlizzCon.
“Our hope is that these experiences – alongside several live-streamed industry events where we’ll keep you up to date with what’s happening in our game universes – will capture the essence of what makes the Blizzard community so special,” Blizzard states in the blog post.
A Blizzard representative tells Windows Central that Blizzard made the call to cancel BlizzCon and not Microsoft, which completed its acquisition of the company last year. In a statement to the outlet, the representative says, “This is a Blizzard decision. We have explored different event formats in the past, and this isn’t the first time we’re skipping BlizzCon or trying something new. While we have great things to share in 2024, the timing just doesn’t line up for one single event at the end of the year.”
BlizzCon began in 2005 as an annual convention celebrating all things Blizzard. Last year’s show saw the reveal of World of Warcraft’s next expansion, The War Within, as well as two other expansions coming after it. It’s good that the event is only taking a year off as opposed to being canned for good, and we’re curious to see how the alternative events shape up over the coming months.