The Sequence Chat #475: Ed Sim, Forbes Top Tech Investor, on AI Investing, Security, Agents and More

Founder of boldstart ventures and widely recognized as one of the best early-stage investors in the world, Ed shares his perspectives on the AI space.

The Sequence Chat is our series of interviews with top AI thought leaders and practitioners. We dive deep and don’t pull any punches 😉.

Today, we have another extra special interview. Ed Sim, the founder of boldstart ventures, is widely regarded as one of the best early-stage VCs in the world, having grown boldstart from $1M to over $800M. I’ve learned a lot from Ed over the years, and he has graciously agreed to share some of his thoughts about the AI space with us.


Let’s dive in:

Welcome to TheSequence. Could you start by telling us a bit about yourself? Share your background, current role, and how you got started in venture investing and AI.

Hi, my name is Ed Sim, and I’m the founder of boldstart ventures, which we started in 2010. I’m in year 29 of investing in technical founders reimagining the enterprise. I’ve seen and lived through a number of cycles, from the Internet boom in the late 90s to the financial crisis in 2008 and now, with GenAI, the greatest platform shift that I believe I will ever see. Some of the founders I’ve been fortunate to partner with from Inception have started and built companies like Snyk (developer security), BigID (data security and privacy), Protect AI (AI security), Tessl (AI native software development platform), Superhuman, Kustomer (AI support), Blockdaemon, Front and many more.

I began investing in “AI” back in 2010, when the focus was on rules-based machine learning, and later rode the first wave of Robotic Process Automation (RPA) in 2017 (check out my post on RPA & F500). My venture investing journey started at JPMorgan, where I worked as an investment analyst building quantitative trading models using historical risk-based pricing data and Excel. While my role was essentially that of a data analyst, curiosity led my colleague and me to start recording macros to automate our tasks. This sparked a deeper interest, and we taught ourselves Visual Basic to push the boundaries of what we could achieve.

Within months, we were requesting the latest Pentium machines to handle larger computations, and I downloaded the Mosaic browser, diving into the early days of the internet. In 1996, I joined a VC fund in New York, marking the beginning of what has now been a super fun, 29-year adventure in venture investing and AI.

If you’re curious about what’s on my mind, I publish a weekly newsletter called What’s 🔥in IT/VC, where I share the latest trends and insights from venture capital, startups, AI, and security. I also delve into company building, raising capital, hiring, and navigating exits. As a huge fan of The Sequence, I’m excited to share my thoughts with all of you!

🛠 AI Work

You coined the term Inception Investing. Can you define what it means and explain how it aligns with the current dynamics of company building in generative AI?

Inception Investing is all about collaborating with founders before they even incorporate: helping them accelerate their ideation process, pre-selling the first hires, and leading that initial round of funding upon incorporation. This is not pre-seed or seed or any of that mumbo jumbo – it’s straight-up backing highly technical people who have a unique insight into what to build and a track record of having done so in the past. It can mean first-time or third-time founders. What’s unique is that an Inception round is unbounded by size. This, by the way, is super important, because labels like pre-seed imply smaller rounds, seed slightly larger, and multistage unbounded. However, founders just need one place to go when they start a company, no matter the round size!

As you know, in the world of GenAI and because funds have too much money, an Inception round can be as big as $100M+ for experienced, in-demand founders. That being said, we meet founders who are thinking in two ways – either raise as little as possible and see where it goes, like CrewAI (initial round of only $2M), or raise a significantly larger amount, like Tessl (initial round of $25M), the third company from Guy Podjarny, whose previous company Snyk is valued at over $7B. When it comes to GenAI, you won’t see boldstart chasing those $100M rounds, but we have backed two stealth companies building specific foundational models with super experienced teams (DeepMind, Boston Dynamics…) and the ability to get or create proprietary data in robotics hand dexterity and bio research. Initial rounds were in the $10M range, and the idea is to prove it out before raising the mega >$50M next round.

The rapid growth of generative AI has redefined traditional fundraising trends, with seed rounds often reaching hundreds of millions of dollars. How has this shift impacted company-building and go-to-market strategies for early-stage startups?

Nail it, then scale it!

The amount of money you have should never change how you build your startup. When you receive your first dollar, the only goal is to build the best product possible, which means having the right vision and the right initial engineering and product team to do so. Anything else is a distraction. No matter how much money a founder raises, whether it’s $100K or $100M, they have to go through this same process – build a product and discover what product-market fit is. You can’t spend your way to product-market fit, nor can you skip steps.

For some, the definition of a minimum viable product can be different – some want to train their own model, which is super expensive and can cost tens to hundreds of millions of dollars, while others, like CrewAI, can iterate with a small team before becoming one of the leading multi-agent frameworks out there, running >1M multi-agent crews a day!

Either way, build a product, get to PMF, and do it as efficiently as possible. From firsthand experience, I know founders who have raised >$100M who still keep a lean burn through the first stage and are ready to ramp up spending once they get to PMF. Just because you have the money doesn’t mean you need to spend it; in fact, the more people you hire, the slower you will go, so be super careful about ramping up too quickly.

Finally, because of all of the dollars flowing into AI-related startups, and depending on the market they are going after, founders and investors do feel the bar to attract and pay for talent requires more capital. And while I’m not thrilled about some of these massive rounds, I’ll concede that for some of these startups there is no other way than to start with a war chest of dollars.

Regardless of your approach, my only advice is that too much money removes constraints, and the best founders are the most resourceful and creative when their backs are against the wall. If you have a large cash war chest, manufacture ways in your mind to make it feel like you don’t!

The initial wave of generative AI investments focused on “GPU-rich” companies that required billions of dollars upfront to experiment with their models. Many of these companies struggled to gain meaningful traction, leading to pivots or acquisitions. How has this affected venture capital perspectives on the AI space? Are we moving away from the GPU-centric approach?

Well, those opportunities sucked up lots of money and many did not end well. That goes back to my point above about too much cash. We are way beyond the “let’s build our own general-purpose model” phase now. We know who the leaders are in the general-purpose LLM game, and investors and founders are seeing that value is accruing up the stack, where companies are much more capital efficient and can deliver value to the end user. It’s portfolio companies like Clay or Superhuman, which use OpenAI and Anthropic but build their own twist for outbound data enrichment or email, that are growing insanely fast. It’s companies like Anysphere, creators of Cursor, and other AI-native software companies growing rapidly that are attracting the next big dollars in venture.

Finally, I still believe that the last mile in the enterprise is the longest mile. There is so much to get right besides choosing which model to use – how do you make sure only the right data can be seen by the right user, how do you make sure the right prompts are used to get the best answer, how do you remove hallucinations, how do you deliver on-premises… you get the idea. Investors and founders are also building vertically focused AI companies, whether in finance, law or HR – no industry is immune, as this GenAI wave is bigger than just SaaS.

If you believe that software ate the world and AI is eating software, then you have to ask whether GenAI will eat into labor. Because if you believe that last point, the opportunity to transform labor markets and capture some of those dollars is in the multi-trillions – this is the opportunity we are all chasing.

You’ve been a successful investor in enterprise AI security and have emphasized that “there is no AI without AI security.” How do traditional cybersecurity practices and techniques need to evolve in the era of generative AI?

We can hit this from lots of different angles. First, hackers are always among the first adopters of new technology, and of course they are doubling down on AI. It’s always easier to attack than it is to defend. Cybersecurity practitioners need to double down on threats generated by AI, which is especially good at social engineering: phishing emails, fake voice and video calls. In addition, GenAI allows hackers to send these messages at scale and also to probe networks, find new software vulnerabilities, and even find new ways to hack into systems.

Secondly, we need to think about the second-order effects of using AI. The more code that is written by AI, the more code one needs to analyze and secure. Companies like Snyk in our portfolio scan and secure code as AI writes it. The basics like LLM prompt injection attacks are getting taken care of, but the next wave is agents – who’s going to make sure agents are acting within the right policies, and who’s going to provide the infrastructure for agents to authenticate and validate who they are?
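
To make the agent-policy point concrete, here is a minimal, hypothetical sketch of gating an agent’s proposed action behind an explicit allow-list before it executes. Every name and policy below is invented for illustration; it is not the approach of any vendor mentioned here.

```python
# Hypothetical illustration: deny by default and check every action an agent
# proposes against an explicit policy before running it. All names are made up.

ALLOWED_ACTIONS = {
    "support-agent": {"read_ticket", "draft_reply"},
    "billing-agent": {"read_invoice"},
}

def is_allowed(agent_id: str, action: str) -> bool:
    """Return True only if this agent is explicitly permitted to take this action."""
    return action in ALLOWED_ACTIONS.get(agent_id, set())

def execute(agent_id: str, action: str, payload: dict) -> str:
    if not is_allowed(agent_id, action):
        # Block the action and leave an audit trail instead of running it.
        print(f"BLOCKED: {agent_id} attempted {action}")
        return "blocked"
    print(f"OK: {agent_id} runs {action} with {payload}")
    return "executed"

# The support agent may draft replies, but not issue refunds.
execute("support-agent", "draft_reply", {"ticket": 123})
execute("support-agent", "issue_refund", {"ticket": 123, "amount": 50})
```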

Finally, any AI models being used in an enterprise and embedded in applications open up opportunities for hackers to exploit. If you think of the SBOM, or software bill of materials, the AI equivalent is the AIBOM, the AI bill of materials, which is even more complex as it includes not only software but also the data and the model itself. For example, one of our portfolio companies where I’m on the board, Protect AI, has a partnership with Hugging Face to scan all of the open source models for vulnerabilities.
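
As a rough illustration of why an AIBOM is broader than a traditional SBOM, here is a hypothetical entry (not a real standard, and not Protect AI’s actual format) that tracks the model, its training data, and its software dependencies in one place:

```python
# Hypothetical AIBOM entry, for illustration only - not a real standard or any
# vendor's schema. It shows that an AI bill of materials spans software, data,
# and the model itself.
aibom_entry = {
    "model": {
        "name": "example-classifier",        # made-up model name
        "source": "huggingface.co/example",  # where the weights came from
        "license": "apache-2.0",
    },
    "training_data": [
        {"dataset": "internal-support-tickets", "pii_reviewed": True},
    ],
    "software_dependencies": [
        {"package": "torch", "version": "2.3.0"},
        {"package": "transformers", "version": "4.41.0"},
    ],
    "scans": {
        "model_file_scanned": True,   # e.g. checked for unsafe serialized code
        "last_scanned": "2025-01-10",
    },
}
```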

At the end of the day, cybersecurity professionals need to look at the threats from GenAI holistically from the network to the software to the data to the people and ultimately use AI to keep up with all of the AI threats coming at scale! Many of the smartest CISOs I know are already tinkering with agents to automate some of their response workflow, and I expect this to become more prevalent in the next couple of years so these cybersecurity teams can continue to scale while hiring less.

Enterprise security is known for being a challenging market with high costs, long sales cycles, and significant resistance to change. Is AI security innovative enough for new startups to disrupt the market, or will the incumbents continue to dominate?

Many of our best Inception investments in cybersecurity were in founders who could see the future and anticipate new attack vectors that hackers could exploit. Despite how long and hard it can be to sell into these organizations, every CISO has some discretionary budget to spend on new threats. In my 29 years as an enterprise VC, I’ve never seen a category, if you will, explode in interest as fast as AI security. One caveat is that the idea of AI security is super broad, and for these purposes I want to limit it to securing AI usage in the enterprise. This covers the SaaS apps that employees use, the models and software that enterprises build and deploy, and of course the data security and privacy around it.

As I lay this out, you can already see the breadth and depth needed to be an AI security vendor, and I can promise you that no incumbent vendor other than Microsoft has the understanding to cover every category. Because of that, you are seeing niche startup vendors play in LLM and prompt injection security, network security, AI model security from offensive red teaming to understanding the AIBOM as mentioned above, data privacy, open source security, agent security… you get the picture. Sure, some incumbents like Cisco bought Robust Intelligence and actually released something interesting called Cisco AI Defense. And of course we have Protect AI, a boldstart portfolio company, which is the only pure-play startup that covers the full gamut of AI security, from what developers build to what AI engineers use for models to what employees use, and finally the AI-SPM dashboard for CISOs. In my opinion, this category will rapidly consolidate as niche vendors either get swallowed up or killed by incumbents with large installed bases of customers, while a select few startup vendors reach escape velocity as standalone new platform plays for AI security.

Boldstart has been active in the AI agent space with investments like Crew AI. What are your views on AI agents, and how can companies succeed in such a hyper-fragmented market?

Yes, we’ve been fortunate to have a front-row seat in the agent space, as we led CrewAI’s Inception round and have watched the team scale to over 1 million multi-agent crews run per day. The growth has been simply astounding. More importantly, CrewAI is not just a platform to build multi-agent teams but also a platform to build and orchestrate the agentic workflows needed to solve business problems. This requires offering developers an agent system that autonomously performs tasks with minimal human intervention by observing, planning, acting, learning and repeating. CrewAI helps developers easily build these teams and workflows, and I feel like we are just in the first inning of a long game.
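
For readers who haven’t tried it, here is a minimal sketch of a two-agent crew using CrewAI’s Agent/Task/Crew interface. The roles and tasks are invented, exact parameters can differ across CrewAI versions, and an LLM provider key is assumed to be configured in the environment.

```python
# Minimal two-agent crew sketch using CrewAI. Roles and tasks are invented;
# an LLM API key (e.g. OPENAI_API_KEY) is assumed to be set in the environment.
from crewai import Agent, Task, Crew

researcher = Agent(
    role="Market researcher",
    goal="Summarize recent news about enterprise AI agents",
    backstory="An analyst who tracks the agent ecosystem daily.",
)
writer = Agent(
    role="Briefing writer",
    goal="Turn research notes into a short, readable briefing",
    backstory="A concise technical writer.",
)

research_task = Task(
    description="Collect and summarize this week's notable agent news.",
    expected_output="Five bullet points with one-line summaries.",
    agent=researcher,
)
writing_task = Task(
    description="Write a one-paragraph briefing from the research bullets.",
    expected_output="A single plain-English paragraph.",
    agent=writer,
)

# By default the crew runs the tasks sequentially.
crew = Crew(agents=[researcher, writer], tasks=[research_task, writing_task])
print(crew.kickoff())
```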

There are tons of competitors appearing every single day, from established vendors like Microsoft and Google to a number of new startups. I don’t know how this will all shake out, but what I do know is that winning requires a founder with a killer long-term vision; an amazing product that developers love, as winning hearts ❤️ and minds 🧠 is what’s required for success; speed of execution; and finally an ecosystem of partners who use your framework as one of their core offerings. Fortunately, CrewAI hits the bullseye 🎯 on all fronts and was just announced as a key Nvidia partner at CES by Jensen Huang as he laid out his vision of the agentic future. Other partners include Cloudera and IBM, with more to come.

In the long run, I also believe we will live in a world where each of us has hundreds of agents doing work for us around the clock, and if you extrapolate the second-order effects, we can easily conclude that a whole new infrastructure will be needed to support hundreds of thousands of agents doing work at enterprises. Think about runtime ephemeral authentication and access, observability for all of these heterogeneous agents, interoperability for these agents to talk to one another, security policies on what is allowed and what is not, and governance. These are all areas we at boldstart are investing in or have already invested in 😄 with companies in stealth.

In the AI agent space, which segment is better positioned for short- to medium-term success: vertical, domain-specific agents or horizontal, agentic platforms?

In the long run, we will all win if agents get smarter, hallucinate less, and are easily programmable and monitored. We have to remember that this enterprise AI wave is MUCH BIGGER THAN JUST SOFTWARE. AI is Eating Software, but will AI also EAT LABOR? We are not just talking about reallocating existing SaaS dollars. AI’s impact on worker productivity could be an additional $6-8 trillion, and if vendors can capture some of that value, the potential is enormous.

When it comes to who wins over time, there will be winners in every category, from vertical to domain-specific to agentic platforms. We’re investing in all of these areas, from the infrastructure to domain experts building armies of agents to help security professionals detect and respond to threats faster, to automating customer service, to… In the end, it’s super easy to start a company now but hard to build a multi-hundred-million-revenue business. For every 500 companies that get funded, maybe 1-2 reach escape velocity. The number of companies that don’t make it will be higher than ever, but the companies that do survive will create far more value than all of the money lost. That’s the opportunity!

boldstart has had remarkable success in the SaaS space. Do you think traditional SaaS will be replaced by AI-driven agentic experiences? Are we transitioning from form-based interfaces to multimodal agent experiences?

Once again, I don’t think this is an all-or-nothing proposition. It’s pretty clear that if agents proliferate like we expect them to, they will reduce the number of seats in an enterprise and cannibalize seat-based pricing, which is the de facto business model for existing SaaS applications. Depending on the function, we will eventually have outcome-based pricing, no doubt. In fact, one of our boldstart portfolio companies, Kustomer, was one of the first vendors to offer 100% outcome-based pricing, which you can read about here. When we did the math, it ate into short-term revenue marginally while future-proofing the business for the long term. The customers frankly love it.
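
To make the trade-off concrete, here is a toy comparison with made-up numbers – none of these figures come from Kustomer or any boldstart company – showing how seat-based revenue is capped by headcount while outcome-based revenue tracks the volume of work completed:

```python
# Toy comparison with invented numbers - nothing here comes from a real company.
# Seat-based revenue is fixed by headcount; outcome-based revenue scales with
# the volume of work the agents actually complete.

seats = 200
price_per_seat = 100                 # $ per seat per month
seat_based = seats * price_per_seat  # $20,000 per month

resolutions = 22_000                 # tickets resolved autonomously per month
price_per_resolution = 0.85          # $ charged per resolved ticket
outcome_based = resolutions * price_per_resolution  # $18,700 per month

print(f"Seat-based:    ${seat_based:,.0f}/month")
print(f"Outcome-based: ${outcome_based:,.0f}/month")
# Slightly lower today, but revenue now grows with resolved volume,
# not with the number of human seats.
```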

Satya Nadella from Microsoft was recently interviewed and said that “the notion that business applications exist, that will probably all collapse in the agent era. They are essentially CRUD databases with business logic. All the logic is going to agents, which are multi-repo, multi-vendor. When logic moves to an agent, then people will replace back-ends. Seeing high win rates for Dynamics with agents.”

I agree with him 100% but my question is how long will this take?

Besides agents accessing applications or just the raw data, I 100% believe other interfaces will take off, like voice, or even using your camera to help field workers get work done. Chat-based interfaces are nice, but they will eventually become a relic in the years to come as machines talk to more machines and we have other ways of accessing data from systems. I’m also interested in the idea of dynamic UIs for each user or role, based on context or location, where the application and what you see may change on every single login, intuiting what you want to see and just giving you the answer.

The next few years are going to be interesting as we watch MSFT, Salesforce, ServiceNow and others protect their turf by offering their own agentic workflows, while startups build from a clean slate with no app or dog 🐶 in the hunt. Regardless, the opportunity to reshuffle the deck in the years to come is mind-blowing! Hundreds of billions of dollars of SaaS revenue are at stake, and then there will be trillions of dollars of opportunity if these agents eat into the labor market!

💥 Miscellaneous – a set of rapid-fire questions

What’s your favorite area of generative AI outside of security and agents?

I feel like a kid in a candy shop at the moment. I’m just enjoying playing around with all of the new personal tech coming out, and right now Google NotebookLM is giving me insane superpowers, allowing me to easily research and understand research papers and documents, synthesize them, and turn all of that into a podcast with Q&A – just so much fun.

Do you think we’ll achieve AGI through transformers and scaling laws, or will it require entirely new architectures?

That’s above my pay grade 🤣, and I’ll let the scientists decide. But at the moment, it seems to me we still have some room to grow using transformers by leveraging agentic reasoning and scaling test-time compute. We also have other promising new areas of research, like Google DeepMind’s Titans for long-term memory, to deliver even better answers faster. That being said, all technology eventually gets displaced!

What advice would you give to founders starting in the generative AI space? What’s the most common mistake you’ve seen founders make in AI companies?

Think in first principles – focus on the problem you are solving and for whom, how you are uniquely solving that problem to make an end user’s life orders of magnitude better with your product or service than without it, and then think about the AI last. If you think about the AI first, you can get lost in the jungle, focusing on a cool technology looking for a problem to solve. Then think about your data moat, or the long-term secret sauce of your business if you are successful and as you scale – too many folks are rushing to get an AI product out the door that can easily be copied, with no long-term defensibility. Finally, remember that the “perfect is the enemy of the good,” which means you should ship product and iterate as fast as possible. Also think about how you use AI internally for development, sales, customer support, and outbound to build as lean a company as possible with as little venture money as possible 😄.

Who is your favorite mathematician or computer scientist, and why?

Hands down Claude Shannon, as without him we wouldn’t have information theory – the study of encoding and transmitting information efficiently. This theory underpins how data is processed, stored, and transmitted, which is critical for pretty much all technology today, especially AI systems. In addition, he was one of the first to think about building machines that could think, developing an early chess-playing program that showed how machines could make decisions. Finally, we wouldn’t have the freedom we have today with wireless communications without Claude Shannon!