Additional reporting by Iain Martin and Richard Nieva.
Former xAI researcher Eric Zelikman is raising $1 billion for a new startup called Humans& that will train AI models to be better at collaborating with humans, six sources told Forbes. The company is in talks for a $5 billion valuation, the sources added.
Other founders include early Google employee Georges Harik and researchers who worked at Meta, Anthropic, OpenAI and DeepMind, sources said.
Humans& declined to comment.
While it’s unclear what product the company plans to build, Humans& has told investors that it is working on a new way to train models to remember and react to a person’s preferences and interests, so that AI can empower humans, in contrast to other AI labs, which focus on how well AI can replace them.
“I personally strongly believe that we’re much more likely to solve a lot of these fundamental human problems by building models that are really good at collaborating with large groups of people, that are really good at understanding different people’s goals, different people’s ambitions, different people’s values, understanding different people’s weaknesses and how to coordinate with these large groups of people to make everyone more effective,” Zelikman told investor Sarah Guo on the No Priors podcast earlier in October.
The company has told investors that its new training paradigm will require more compute than current AI training strategies, one source said.
Fewer startups are attempting new ways of training models, given the enormous amount of cash and compute needed to build these powerful systems, instead opting to build applications on top of frontier AI models like GPT-5, Claude Opus 4.1 and Grok 4. If the fundraise closes, Humans& would join a handful of labs that have raised significant sums prior to releasing any product, such as Mira Murati’s Thinking Machines Lab and Ilya Sutskever’s Safe Superintelligence.
One investor who passed on the round said the round size was “too big of a number” for an early-stage company.
Work at xAI and got a tip for us? Contact reporters Anna Tong at atong@forbes.com, Rashi Shrivastava at rshrivastava@forbes.com or rashi.17 on Signal and Iain Martin at iain.martin@forbes.com or 646-739-6427.
Zelikman, who was a PhD candidate in computer science at Stanford, wrote “the first paper to train language models to reason in natural language,” according to his website. OpenAI’s breakthrough “o” series is an example of these types of reasoning models. He then worked on collecting pretraining data (the data used in the initial phase of training models), reasoning, and agent infrastructure at xAI.
Georges Harik was the seventh employee at Google, a co-creator of AdWords and AdSense, and is a startup investor, according to his LinkedIn.
Zelikman’s PhD advisor Noah Goodman, a professor of computer science and psychology at Stanford, is also a founder. Goodman worked on the post-training team for Gemini, according to his LinkedIn.
A fourth founder, Andi Peng, worked in post-training and behavioral reinforcement learning at Anthropic, per LinkedIn.
Another founding team member, Ray Ramadorai, worked on the system design for Microsoft’s large data centers, per LinkedIn.
“If we invest everything in autonomy and nothing in collaboration, all IQ and no EQ, the future will be a colder place,” Zelikman wrote on his website.