President Sally Kornbluth and OpenAI CEO Sam Altman discuss the future of AI

How is the field of artificial intelligence evolving and what does it mean for the future of work, education, and humanity? MIT President Sally Kornbluth and OpenAI CEO Sam Altman covered all that and more in a wide-ranging discussion on MIT’s campus May 2.

The success of OpenAI’s ChatGPT large language models has helped spur a wave of investment and innovation in the field of artificial intelligence. ChatGPT, initially powered by GPT-3.5, became the fastest-growing consumer software application in history after its release at the end of 2022, with hundreds of millions of people using the tool. Since then, OpenAI has also demonstrated AI-driven image-, audio-, and video-generation products and partnered with Microsoft.

The event, which took place in a packed Kresge Auditorium, captured the excitement of the moment around AI, with an eye toward what’s next.

“I think most of us remember the first time we saw ChatGPT and were like, ‘Oh my god, that is so cool!’” Kornbluth said. “Now we’re trying to figure out what the next generation of all this is going to be.”

For his part, Altman welcomes the high expectations around his company and the field of artificial intelligence more broadly.

“I think it’s awesome that for two weeks, everybody was freaking out about GPT-4, and then by the third week, everyone was like, ‘Come on, where’s GPT-5?’” Altman said. “I think that says something legitimately great about human expectation and striving and why we all have to [be working to] make things better.”

The problems with AI

Early on in their discussion, Kornbluth and Altman discussed the many ethical dilemmas posed by AI.

“I think we’ve made surprisingly good progress around how to align a system around a set of values,” Altman said. “As much as people like to say ‘You can’t use these things because they’re spewing toxic waste all the time,’ GPT-4 behaves kind of the way you want it to, and we’re able to get it to follow a given set of values, not perfectly well, but better than I expected by this point.”

Altman also pointed out that people don’t agree on exactly how an AI system should behave in many situations, complicating efforts to create a universal code of conduct.

“How do we decide what values a system should have?” Altman asked. “How do we decide what a system should do? How much does society define boundaries versus trusting the user with these tools? Not everyone will use them the way we like, but that’s just kind of the case with tools. I think it’s important to give people a lot of control … but there are some things a system just shouldn’t do, and we’ll have to collectively negotiate what those are.”

Kornbluth agreed that goals like eliminating bias in AI systems will be difficult to achieve.

“It’s interesting to think about whether or not we can make models less biased than we are as human beings,” she said.

Kornbluth also brought up privacy concerns associated with the vast amounts of data needed to train today’s large language models. Altman said society has been grappling with those concerns since the dawn of the internet, but AI is making such considerations more complex and higher-stakes. He also sees entirely new questions raised by the prospect of powerful AI systems.

“How are we going to navigate the privacy versus utility versus safety tradeoffs?” Altman asked. “Where we all individually decide to set those tradeoffs, and the advantages that will be possible if someone lets the system be trained on their entire life, is a new thing for society to navigate. I don’t know what the answers will be.”

Altman said he believes progress in future versions of AI models will help address both the privacy and the energy consumption concerns surrounding AI.

“What we want out of GPT-5 or 6 or whatever is for it to be the best reasoning engine possible,” Altman said. “It is true that right now, the only way we’re able to do that is by training it on tons and tons of data. In that process, it’s learning something about how to do very, very limited reasoning or cognition or whatever you want to call it. But the fact that it can memorize data, or the fact that it’s storing data at all in its parameter space, I think we’ll look back and say, ‘That was kind of a weird waste of resources.’ I assume at some point, we’ll figure out how to separate the reasoning engine from the need for tons of data or storing the data in [the model], and be able to treat them as separate things.”

Kornbluth also asked about how AI might lead to job displacement.

“One of the things that annoys me most about people who work on AI is when they stand up with a straight face and say, ‘This will never cause any job elimination. This is just an additive thing. This is just all going to be great,’” Altman said. “This is going to eliminate a lot of current jobs, and this is going to change the way that a lot of current jobs function, and this is going to create entirely new jobs. That always happens with technology.”

The promise of AI

Altman believes progress in AI will make grappling with all of the field’s current problems worth it.

“If we spent 1 percent of the world’s electricity training a powerful AI, and that AI helped us figure out how to get to non-carbon-based energy or make deep carbon capture better, that would be a massive win,” Altman said.

He also said the application of AI he’s most interested in is scientific discovery.

“I believe [scientific discovery] is the core engine of human progress and that it is the only way we drive sustainable economic growth,” Altman said. “People aren’t content with GPT-4. They want things to get better. Everyone wants more and better and faster, and science is how we get there.”

Kornbluth also asked Altman for his advice for students thinking about their careers. He urged students not to limit themselves.

“The most important lesson to learn early on in your career is that you can kind of figure anything out, and no one has all of the answers when they start out,” Altman said. “You just sort of stumble your way through, have a fast iteration speed, and try to drift toward the most interesting problems to you, and be around the most impressive people and have this trust that you’ll successfully iterate to the right thing. … You can do more than you think, faster than you think.”

The advice was part of a broader message Altman had about staying optimistic and working to create a better future.

“The way we are teaching our young people that the world is totally screwed and that it’s hopeless to try to solve problems, that all we can do is sit in our bedrooms in the dark and think about how awful we are, is a really deeply unproductive streak,” Altman said. “I hope MIT is different than a lot of other college campuses. I assume it is. But you all need to make it part of your life mission to fight against this. Prosperity, abundance, a better life next year, a better life for our children. That is the only path forward. That is the only way to have a functioning society … and the anti-progress streak, the anti ‘people deserve a great life’ streak, is something I hope you all fight against.”