Box CEO Aaron Levie on AI’s ‘era of context’


On Thursday, Box announced a new set of agentic AI features, putting AI models at the backbone of the company's products, as it kicked off its developer conference, BoxWorks.

It's more product news than the conference usually brings, and the agentic AI push reflects the accelerating pace of development: Box launched its AI Studio last year, followed by a new set of data-extraction agents in February and others for search and deep research in May.

Now, the company is rolling out a new system, Box Automate, that serves as a kind of operating system for AI agents, breaking workflows into discrete sections that can be augmented with AI as needed.

I talked with Aaron Levie about the company's view on AI and the tricky business of competing with foundation model companies. Unsurprisingly, he was bullish on the potential of AI agents in the modern workplace, but he was also clear-eyed about the limitations of current models and how to manage those limitations with existing technology.

This interview has been edited for length and clarity.

TechCrunch: You're announcing a bunch of AI products today, so I'd like to start by asking about the big-picture vision. Why build AI agents into a cloud content-management service?

Aaron Levie: So the thing we think about all day, and what our focus is at Box, is how much work is changing because of AI. And most of the impact right now is in workflows that involve unstructured data. We've long been able to automate work that deals with structured data going into a database. If you think about CRM systems, ERP systems, HR systems, we've had decades of automation in that space. But where we've never had automation is anything that touches unstructured data.


Think about any kind of legal review process, any kind of marketing asset management process, any kind of M&A deal review: all of these workflows deal with lots of unstructured data. People have to review that data, update it, make decisions, and so on. We've never been able to bring much automation to those workflows. We could describe them in software, but computers just weren't good enough at reading a document or looking at a marketing asset.

So for us, AI agents mean that, for the first time, we can actually tap into all of that unstructured data.

TC: What about the risks of deploying agents in a business context? Some of your customers must be nervous about unleashing something like this on sensitive data.

Levie: What we see from customers is that they want to know that every time they run the workflow, the agent will be just as effective as the last time, and that things stay on the rails. You don't want an agent making compounding mistakes, where it gets the first couple of steps wrong out of 100 and then starts to run wild.

Having the right boundary points becomes really important: where does the agent begin, and where do the other parts of the system end? For each workflow, the question is how much determinism you need to maintain and where you can be fully agentic and non-deterministic.

What you can do with Box Automate is decide how much you want each individual agent to handle before it hands off to another agent. So you might have a submission agent that's separate from the review agent, and so on. It basically lets you deploy AI agents at scale in any kind of workflow or business process.
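The handoff pattern Levie describes, a workflow broken into small, single-purpose agent steps with deterministic boundaries between them, can be sketched roughly as follows. This is an illustrative Python sketch, not Box's actual API; the `Step`, `run_workflow`, and agent names are all hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch: a workflow decomposed into single-purpose "agent"
# steps that hand off state in a fixed, deterministic order, rather than
# one long-running agent doing everything.

@dataclass
class Step:
    name: str
    run: Callable[[dict], dict]  # each agent transforms the shared state

def run_workflow(steps: list[Step], state: dict) -> dict:
    """Execute steps in order; check each agent's output at the boundary
    before handing it to the next agent, so errors can't compound."""
    for step in steps:
        state = step.run(state)
        if state.get("error"):  # deterministic boundary: stop on failure
            raise RuntimeError(f"{step.name} failed: {state['error']}")
    return state

# Two illustrative agents: one submits, a separate one reviews.
submit = Step("submit", lambda s: {**s, "submitted": True})
review = Step("review", lambda s: {**s, "approved": s.get("submitted", False)})

result = run_workflow([submit, review], {"doc": "contract.pdf"})
```

The point of the structure is that the handoffs themselves are ordinary, deterministic code; only the inside of each step needs to be agentic.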

A visualization of a Box Automate workflow, with AI agents deployed for specific tasks. Image Credits: Box

TC: What kinds of problems do you prevent by dividing up the workflow like that?

Levie: We've already seen some of the limitations of even the most advanced fully agentic systems, like Claude Code. At some point in a task, the model runs out of context-window room to keep making good decisions. There's no free lunch in AI right now. You can't just point a long-running agent with an unlimited context window at any task in your business. So you have to break up the workflow and use sub-agents.

I think we're in the era of context in AI. What AI models and agents need is context, and the context they need to do their work sits inside your unstructured data. So our whole system is really designed to get the right context to the AI agent as effectively as possible.
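One concrete way to read "era of context" is context selection under a budget: rather than handing an agent everything, pick only the snippets of unstructured data most relevant to the task and stay under a fixed context limit. The sketch below is illustrative only (the toy word-overlap relevance score and the `pick_context` function are assumptions, not anything Box has described).

```python
# Illustrative sketch: select the most relevant snippets for an agent
# while staying under a fixed context budget, instead of passing all data.

def pick_context(snippets: list[str], query: str, budget_chars: int) -> list[str]:
    # Toy relevance score: number of query words present in the snippet.
    # A real system would use embeddings/vector search instead.
    words = set(query.lower().split())
    scored = sorted(
        snippets,
        key=lambda s: sum(w in s.lower() for w in words),
        reverse=True,
    )
    chosen, used = [], 0
    for s in scored:
        if used + len(s) > budget_chars:
            break  # context window is full; stop adding snippets
        chosen.append(s)
        used += len(s)
    return chosen

docs = [
    "The NDA term is 24 months.",
    "Lunch menu for Friday.",
    "Renewal of the NDA requires written notice.",
]
ctx = pick_context(docs, "NDA term renewal", budget_chars=80)
```

The budget plays the role of the context window: once it's exhausted, remaining work has to go to another sub-agent with its own fresh context.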

TC: There's a broader debate in the industry about the benefits of large, powerful frontier models versus smaller, more reliable ones. Does this put you on the side of the smaller models?

Levie: I should probably clarify: nothing about our system prevents a task from being arbitrarily long or complicated. What we're trying to do is provide the right controls so you can decide how agentic you want to be.

We don't have any particular view on where people should land on that spectrum. We're simply trying to design a future-proof architecture. We've designed it so that as the models evolve and agents' capabilities improve, you get all of those benefits on our platform.

TC: Another concern is data control. Since models are trained on so much data, there's a real fear that sensitive data will be regurgitated or misused. How does that factor in?

Levie: This is where a lot of AI deployments go wrong. People think, "Hey, it's easy. I'll point an AI model at all my unstructured data, and it will answer people's questions." And then it starts answering with data that you don't have access to, or shouldn't have access to. You need a very strong layer that manages access controls, data security, permissions, data governance, compliance, everything.

So we've benefited from decades of building a system that manages exactly that problem: how do you make sure each piece of data in an enterprise is accessible only to the right people? So when an agent answers a question, you know ahead of time that it can't draw on any data the person shouldn't have access to. That's fundamentally built into our system.

TC: Earlier this week, Anthropic launched a new feature for uploading files directly to Claude.ai. That's a long way from the kind of file infrastructure Box operates, but you must be thinking about potential competition from the foundation model companies. How do you approach that strategically?

Levie: So if you think about what enterprises need when they deploy AI at scale, they need security, permissions, and control. They need user interfaces, they need robust APIs, and they want choice in AI models, because on any given day one AI model may be better than another for a particular use case, and they don't want to be locked into a specific platform.

So what we've built is a system that gives you all of those capabilities effectively. We do the storage, the security, the permissions, the vector embeddings, and we connect to each of the top AI models out there.
