Memories.ai, a young infrastructure startup founded by former Meta engineers, is trying to solve a problem that has quietly limited much of today’s physical AI: machines can see, but they rarely remember. The company says it is building a large visual memory model, or LVMM, that turns continuous video into a searchable memory layer for wearables, robots, and other camera-equipped systems.
The idea is simple to describe but ambitious in scope. Instead of treating video as a stream of disposable frames, Memories.ai wants devices to store and index what they have seen so that those experiences can be queried later. In practice, that could mean asking a wearable where a user last saw a red toolbox, or enabling a warehouse robot to compare a loading area’s current state with how it looked the night before.
The startup is positioning itself less as a hardware company and more as a foundational software and infrastructure layer for what it calls physical AI. That includes smart glasses, AI pins, warehouse machines, inspection robots, and potentially any device that moves through the real world with a camera attached. According to the company, its system ingests video, converts scenes and events into embeddings, and stores them in a form that can later be searched using text queries or visual references.
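The pipeline the company describes — embed what a camera sees, store it with a timestamp, retrieve it later with a text query — can be sketched in miniature. Everything below is a hypothetical illustration, not Memories.ai's actual API: the `embed` function is a toy bag-of-tags stand-in for a learned multimodal encoder, and `VisualMemory` is an invented class showing how embedding-based retrieval works in principle.

```python
import math
from collections import Counter

def embed(tokens):
    """Toy stand-in for a learned visual/text encoder: a bag-of-tokens
    vector. A real system would map frames and queries into a shared
    learned embedding space."""
    return Counter(tokens)

def cosine(a, b):
    """Cosine similarity between two sparse bag-of-token vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VisualMemory:
    """Minimal memory layer: ingest 'frames' (here, tag lists with
    timestamps), then answer text queries by nearest-embedding search."""
    def __init__(self):
        self.entries = []  # list of (timestamp, embedding, tags)

    def ingest(self, timestamp, tags):
        self.entries.append((timestamp, embed(tags), tags))

    def query(self, text):
        q = embed(text.lower().split())
        return max(self.entries, key=lambda e: cosine(q, e[1]))

mem = VisualMemory()
mem.ingest("09:14", ["red", "toolbox", "garage", "shelf"])
mem.ingest("12:30", ["keys", "kitchen", "counter"])
ts, _, tags = mem.query("where is the red toolbox")
print(ts, tags)  # the garage sighting ranks highest
```

The point of the sketch is the shape of the system, not the encoder: once experiences live in a vector index, "where did I last see X" becomes a similarity search rather than a replay of raw footage.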
That framing matters because many current AI devices are still largely stateless. They can identify objects, respond to prompts, and interpret a live scene, but once a frame is processed, the moment is usually gone. Memories.ai is effectively arguing that the next leap for wearables and robotics will come from giving these systems persistent memory, not just real-time perception.
The company used Nvidia’s GTC 2026 conference to announce a collaboration that places its technology on top of Nvidia infrastructure. Memories.ai says its stack uses Nvidia Cosmos-Reason 2 along with Nvidia Metropolis tools for video search and summarization, helping power the indexing and retrieval of large video streams.
That partnership gives the startup something every young infrastructure company needs: credibility. It also places Memories.ai inside a broader industry push to build software layers for AI systems that operate in the physical world, where memory, retrieval speed, and edge performance may matter just as much as raw model quality. The company has also said its LVMM 2.0 will support Qualcomm processors, opening the door to more on-device deployments in phones, cameras, and wearables starting in 2026.
Memories.ai was founded by CEO Shawn Shen and CTO Ben Zhou, who previously worked at Meta on the AI system behind Ray-Ban smart glasses. Their takeaway from that work was that recording the world is only half the challenge. The harder problem is making all that captured video useful later.
The founders spun the company out in 2024 and have since raised about $16 million, including an $8 million seed round and an $8 million extension led by Susa Ventures, with participation from Camp Fund, Crane Venture, and other investors.
To support data collection, the startup built an internal wearable camera called LUCI, a small clip-on recorder used by paid data collectors to capture first-person video. Shen has said the device is not intended as a consumer product, but rather as a practical workaround for the limitations of existing hardware, particularly around battery life and all-day usability.
Still, the hardware is useful as a signal of how Memories.ai imagines its software being used. At CES 2026, LUCI was presented as an AI pin concept for developers, suggesting a future where third-party applications could build on top of a persistent visual memory layer instead of creating their own video indexing systems from scratch.
The broader pitch is that future AI devices should not just react to the present moment. They should be able to reason over a timeline of past experience. For consumer wearables, that could translate into object recall, daily summaries, or contextual assistance tied to what a user has actually seen. For industrial robotics, it could mean better anomaly detection, shift-to-shift comparison, and more reliable awareness of changing environments.
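The industrial use case — comparing a scene against how it looked on an earlier shift — reduces to measuring how far a current scene embedding has drifted from a stored baseline. The sketch below is an assumption-laden illustration: the four-dimensional vectors and the 0.9 threshold are invented for the example, not values from any real deployment.

```python
import math

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def changed(baseline_vec, current_vec, threshold=0.9):
    """Flag an anomaly when the current scene embedding drifts away
    from the stored baseline (similarity falls below the threshold)."""
    return cosine(baseline_vec, current_vec) < threshold

# Hypothetical scene embeddings of a loading area, last night vs. now.
last_night = [0.9, 0.1, 0.0, 0.4]
this_morning_same = [0.88, 0.12, 0.01, 0.41]   # area essentially unchanged
this_morning_moved = [0.2, 0.9, 0.5, 0.1]      # pallets rearranged

print(changed(last_night, this_morning_same))   # False
print(changed(last_night, this_morning_moved))  # True
```

In practice the threshold would be tuned per site, and a production system would compare many regions of a scene rather than a single whole-frame vector, but the mechanism is the same: persistent memory turns anomaly detection into a lookup plus a distance check.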
That vision also hints at why larger hardware players may be interested. Memories.ai has said it is already working with several large wearable manufacturers, though it has not publicly named them. If the company succeeds, it could become one of the quiet infrastructure providers behind the next wave of AI glasses, pins, and autonomous machines.
The promise of searchable visual memory is easy to grasp. So are the privacy questions. A system designed to continuously record, embed, store, and retrieve visual experience could be useful in homes, warehouses, and public spaces, but it also raises obvious concerns around consent, surveillance, retention policies, and who gets access to the archive. Those questions may become just as important as the technology itself if visual memory shifts from demo to deployment.
For now, Memories.ai is making an early bet on where physical AI is headed. The company’s argument is that smart devices will need something closer to memory than perception alone. In a market crowded with assistants that can look at the world, Memories.ai is trying to build the layer that lets them remember it.