People say that Silicon Valley has matured beyond the hotheaded mindset of “move fast, break things, then fix them later,” and that companies have adopted a slower, more responsible approach to building the future of our industry.
Unfortunately, current trends tell a different story.
Despite the lip service, the way companies build things has yet to actually change. Tech startups are still running on the same code of shortcuts and false promises, and the declining quality of products shows it. “Move fast and break things” is very much still Silicon Valley’s creed – and even if it had truly died, the AI boom has reanimated it in full force.
Recent advancements in AI are already radically transforming the way we work and live. In just the last couple of years, AI has gone from the domain of computer science professionals to a household tool thanks to the rapid proliferation of generative AI tools like ChatGPT. If tech companies “move fast and break things” with AI, there may be no option to “fix them later,” especially when models are trained on sensitive personal data. You can’t unring that bell, and the echo will reverberate throughout society, potentially causing irreparable harm. From malicious deepfakes to fraud schemes to disinformation campaigns, we’re already seeing the negative side of AI come to light.
At the same time, though, this technology has the power to change our society for the better. Enterprise adoption of AI will be as revolutionary as the move to the cloud was; companies will completely rebuild on AI, and they will become dramatically more productive and efficient because of it. On an individual level, generative AI will become our trusted assistant, helping us complete everyday tasks, experiment creatively and unlock new knowledge and opportunities.
The AI future can be a bright one, but it requires a major cultural shift in the place where that future is being built.
Why “Move Fast and Break Things” Is Incompatible with AI
“Move fast and break things” operates on two major assumptions: one, that anything that doesn’t work at launch can be patched in a later update; and two, that breaking things can lead to breakthroughs, given enough creative coding and outside-the-box thinking. And while plenty of great innovations have come out of mistakes, this isn’t penicillin or Coca-Cola. Artificial intelligence is an extraordinarily powerful technology that must be handled with the utmost caution. The risks of data breaches and criminal misuse are simply too high to ignore.
Unfortunately, Silicon Valley has a bad habit of glorifying the messiness of the development process. Companies still promote a ceaseless grind, treating long hours and a lack of work-life balance as the price of making a career. Startups and their shareholders set unrealistic goals that increase the risk of errors and corner-cutting. Boundaries are pushed when, maybe, they shouldn’t be. These behaviors coalesce into a toxic industry culture that encourages hype-chasing at the expense of ethics.
The current pace of AI development cannot continue within this culture. If AI is going to solve some of the world’s most pressing problems, it will have to train on highly sensitive information, and companies have a critical responsibility to protect that information.
Safeguards take time to implement, and time is something Silicon Valley is thoroughly convinced it doesn’t have. Already, we’re seeing AI companies forgoing necessary guardrails for the sake of pumping out new products. This might satisfy shareholders in the short term, but the long-term risks set these organizations up for massive financial harm down the road – not to mention a complete collapse of any goodwill they’ve fostered.
There is also serious legal exposure around intellectual property, as evidenced by the various federal copyright lawsuits already in play against AI companies. Without proper protections against infringement and IP violations, people’s livelihoods are at risk.
To the AI startup that wants to blitz through development and go to market, this seems like a lot to account for – and it is. Protecting people and information takes hard work. But it’s non-negotiable work, even if it forces AI developers to be more thoughtful. In fact, I’d argue that’s the benefit. Build solutions to problems before they arise, and you won’t have to fix whatever breaks down the road.
A New Creed: “Move Strategically to Be Unbreakable”
This past May, the EU approved the world’s first comprehensive AI law, the Artificial Intelligence Act, to manage risk through extensive transparency requirements and the outright banning of AI technologies deemed an unacceptable risk. The law reflects the EU’s historically cautious approach to new technology, an approach that has shaped European AI development since the first sparks of the current boom. Instead of acting on a whim and steering all their venture dollars and engineering capabilities into the latest trend without proper planning, European companies sink their efforts into creating something that will last.
This is not the prevailing approach in the US, despite numerous attempts at regulation. On the legislative front, individual states are largely proposing their own laws, ranging from the woefully inadequate to the massively overreaching, such as California’s proposed SB 1047. All the while, the AI arms race intensifies, and Silicon Valley persists in its old ways.
Venture capitalists are only inflaming the problem. When investing in new startups, they’re not asking about guardrails and safety checks. They want to get a minimum viable product out as fast as possible so they can collect their returns. Silicon Valley has become a breeding ground for get-rich-quick schemes, where people want to make as much money as they can, in as little time as possible, while doing as little work as possible – and they don’t care about the consequences.
For the age of AI, I’d like to propose a replacement for “move fast and break things”: move strategically to be unbreakable. It might not have the same poetic verve as the original, but it does reflect the mindset Silicon Valley needs in today’s technological landscape.
I’m optimistic that the technology industry can do better, and it starts with adopting a customer-centric, future-oriented mindset: create products that last, and maintain them in a way that fosters trust with users. A more mindful approach will make people and organizations feel confident about bringing AI into their lives – and that sounds pretty profitable to me.
Toward a Sustainable Future
The tech world suffers from overwhelming pressure to be first. Founders feel that if they don’t jump on the next big thing right away, they’re going to miss the boat. Of course, being an early mover may increase your chances of success, but being “first” shouldn’t come at the expense of safety and ethics.
When your goal is to build something that lasts, you’ll end up looking more thoroughly for risks and weaknesses. This is also how you find new opportunities for breakthroughs and innovation. The companies that can transform weaknesses into strengths are the ones that can solve tomorrow’s challenges, today.
The hype is real, and the new era of AI is worthy of it. But in our excitement to unlock the power of this technology, we cannot forgo the necessary safeguards that will make these products reliable and trustworthy. AI promises to change our lives for the better, but it can also cause immeasurable harm if security and safety aren’t core to the development process.
For Silicon Valley, this should be a wake-up call: it’s time to leave the mentality of “move fast, break things, then fix them later” behind. Because there is no “later” when the future is now.