How a data processing problem at Lyft became the basis for Eventual


When Eventual founders Sammy Sidhu and Jay Chia were working as software engineers in Lyft's autonomous vehicle program, they witnessed a burgeoning data infrastructure problem, one that would only grow with the rise of AI.

Self-driving cars produce a ton of unstructured data, from 3D scans and photos to text and audio. Lyft engineers didn't have tools that could understand and process all of those different data types at the same time, and all in one place. That left engineers stuck in a lengthy process riddled with reliability problems.

“We had all these brilliant PhDs, brilliant people from across the industry, working on autonomous vehicles, and they were spending 80% of their time working on infrastructure instead of building their actual application,” Sidhu told TechCrunch in a recent interview. “And most of the problems they faced in this domain were around the data infrastructure.”

Sidhu and Chia helped build an internal multimodal data processing tool for Lyft. When Sidhu later began interviewing for other jobs, he found interviewers asking whether he could build the same data solution for their companies, and the concept behind Eventual was born.

Eventual created a Python-native open source data processing engine, called Daft, designed to work quickly across modalities, from text to audio to video and more. Sidhu said the goal is for Daft to become for unstructured data what SQL is for tabular data.

The company was founded in early 2022, before ChatGPT was released and before many people were aware of this data infrastructure gap. It launched the first open source version of Daft later in 2022 and is preparing to launch an enterprise product in the third quarter.

“What we’ve seen with the explosion of ChatGPT is many more people building AI apps in different ways,” Sidhu said. “Everyone started using things like images and documents and videos in their applications.”

While the idea behind Daft came from the autonomous vehicle space, many other industries process multimodal data, including robotics, retail tech, and healthcare. The company now counts Amazon, CloudKitchens, and Together AI as customers.

Eventual has raised two rounds of funding in eight months. The first was a $7.5 million seed round led by CRV. More recently, the company raised a $20 million round led by Felicis, with participation from M12, Microsoft's venture fund, and Citi.

This latest round will go toward bulking up Eventual's open source offering, in addition to building a commercial product that lets its customers create AI applications on top of this processed data.

Astasia Myers, a general partner at Felicis, told TechCrunch that she found Eventual through a market mapping exercise that involved searching for data infrastructure capable of supporting the growing number of multimodal AI models.

Myers said Eventual stood out as a first mover in the space, which will likely only get more crowded, and that its founders had dealt with this data processing problem firsthand. She added that Eventual is solving a real problem.

The multimodal AI industry is predicted to grow at a 35% compound annual growth rate between 2023 and 2028, according to market research firm MarketsandMarkets.

“Annual data generation has increased 1,000x over the past 20 years, and 90% of the world's data was generated in the last two years alone, yet according to IDC the vast majority of that information remains unstructured,” Myers said. “Daft is built around the huge macro trend of AI spanning text, images, videos, and voice: you need a multimodal-native data processing engine.”
