“Unprecedented.” By our estimates, this word was uttered no fewer than 2,500 times during HumanX, this year’s newest and most talked-about AI event.
Although the term may feel overused since the AI revolution took off more than two years ago, it remains fitting. Emotions like distrust, optimism, uncertainty, and confidence continue to coexist when it comes to AI. These sentiments—and the conversations they sparked—were front and center at HumanX, where technologists, CEOs, engineers, policy experts, journalists, researchers, and several Inkhouse clients came together to make sense of AI’s multipolarity.
Distrust
Distrust was a central theme. Stefan Weitz, co-founder and CEO of HumanX, put it bluntly: “Basically, we won’t see real progress unless trust and responsibility are built in from the beginning. Without trust, we’re just building a high-tech house of cards. If we don’t build trust into AI systems, we’re setting ourselves up for problems like regulatory backlash, consumer rejection, and system failures.”

Image provided by HumanX
Former Vice President Kamala Harris reinforced this point: “If we don’t solve this, we risk losing a critical opportunity for America to lead in both innovation and global safety. So, let’s address this directly.” Lane Bess, CEO of Deep Instinct and Chairman of Blaize (both Inkhouse clients), highlighted how AI has compromised privacy: “Privacy has been thrown out the door with AI,” adding, “You don’t have a lot of people focused on the policies around privacy…and transitioning them into the AI space right now,” and identified this gap as a major innovation opportunity.

Image provided by Kaileigh Higgins
Optimism

Image provided by HumanX
While acknowledging the AI boom’s “frothy” nature, Weitz remained optimistic. Drawing parallels to past bubbles that yielded lasting change, like automobiles and the internet, he suggested that even if many AI ventures fail, “a revolution is still possible.”
Inkhouse client Databricks’ established role as a leading data platform, serving “over 10,000 organizations,” forms the bedrock of Co-founder and VP of Engineering Arsalan Tavakoli’s optimistic vision for AI. As he put it, “Our bread and butter has always been [being] a great data platform,” and this foundation naturally leads to the next step: developing “incredible AI applications.”
Furthermore, he emphasized Databricks’ unique position within the AI landscape, positing that “very few organizations” possess the integrated capacity to manage “the data part, the governance part, the AI part… across clouds” while collaborating with diverse model providers. He believes this positioning is “critical to the future of AI.”
Uncertainty
Discussions also centered on the evolving nature of AI. Craig Martell, Chief AI Officer at Inkhouse client Cohesity, emphasized, “Looking to the future, it’s important to think about AI as a set of capabilities… not something monolithic. AI is an ongoing set of experiments you’ll always have to do. There is no easy button or single box to buy.”
Christine Yen, CEO of Inkhouse client Honeycomb, highlighted the need for observability, stating, “We do observability, which means that we help engineering teams make sense of what happens when their software is not doing what they expect, which, as you can imagine in this world with AI, is likely to start happening more than we’re used to.”

Image provided by Kaileigh Higgins.
She also raised critical questions: “Do we train our own models? Do we use a commercially available one? What kinds of tools do we give our teams to build on these rapidly evolving technologies, and which AI code-authoring tools do we incorporate?”
Matt Carroll, CEO of Inkhouse client Immuta, stressed, “We’ve moved from securing applications to securing data,” and detailed the key considerations. “How do we separate policy from platform? How do we ensure effective auditing? And how do we distinguish between negligence and gross negligence?”
Confidence
Jason Warner, CEO of Inkhouse client poolside, expressed strong confidence in AI’s transformative power, particularly in software development. As a frontier AI company building foundational models, poolside aims to empower developers. Warner believes that intelligence, a commodity he likens to electricity, is a key driver of societal change and that AI is “basically changing and bending the arc of humanity in a way that we’ve not done before.”

Image provided by Kaileigh Higgins.
In an interview with Bloomberg’s Caroline Hyde, he also stated, “We’re going to give [everyone] the superpower to write software in the future. And for anyone who currently knows how, it’ll feel like being augmented by 100x.” This vision underscores his conviction that AI will fundamentally reshape how software is created and that intelligence, as a resource, is a critical component of that future.
We’re undoubtedly in uncharted territory, all grappling with the same thought: How do we build something incredible without losing our way? HumanX wasn’t just about AI hype; it was about coming together to answer the biggest question of our lifetime: How do we make this work for everyone?
We’re not just building machines; we’re shaping the future in ways we still can’t fully grasp. HumanX felt like a crucial checkpoint, a moment to pause and ask: Where do we go from here in this unprecedented landscape?
There’s that word again…
There are no simple answers, but if this past week underscored anything, it’s that we must keep asking these hard questions as we push the boundaries of innovation.
Finally, a simpler takeaway from HumanX is that all conferences need a dog park.
