Nvidia challenger Cerebras is in talks with the Indian government on augmenting the country’s artificial intelligence (AI) compute infrastructure, the chipmaking startup’s founder said on December 16.
“I was in Delhi 10 days ago and met with government officials. India doesn’t have enough data centres and supercomputers. It’s a tremendous opportunity. I hope we can do great work in India,” Cerebras founder Andrew Feldman said at the CNBC-TV18 and Moneycontrol Global AI Conclave.
“We met many AI thinkers and leaders in India. Getting government pronouncements and signing pieces of paper is the start. The hard work comes after that, when you build data centres. We have the hard work ahead of us,” he added.
Cerebras’ claim to fame lies in its novel approach, which inverts the typical chip-making process of dicing a silicon wafer into many small chips. Instead, Cerebras uses the entire silicon wafer as a single large chip for its AI supercomputers.
Feldman also said that Cerebras is planning to double the size of its technology team in Bengaluru both next year and the year after.
“We have 38 people in Bengaluru and will double next year and the year after. We do AI developer work in Bengaluru. We don’t divide work by region. We can find exceptional people to lead projects out of Bangalore, Toronto or California,” he said.
Earlier this year, Cerebras said it had built the first of nine AI supercomputers in a partnership with Abu Dhabi-based G42, part of an effort to provide alternatives to systems using Nvidia technology.
Condor Galaxy 1, located in Santa Clara, California, is now up and running, according to Feldman. The supercomputer, which cost more than $100 million, is going to double in size “in the coming weeks,” he said. It will be followed by new systems in Austin, Texas, and Asheville, North Carolina, in the first half of next year.
Abu Dhabi-based G42, a tech conglomerate with nine operating companies that include data centre and cloud service businesses, said that it plans to use the Cerebras systems to sell AI computing services to healthcare and energy companies.
gigaGPT, Cerebras’ implementation of OpenAI researcher Andrej Karpathy’s nanoGPT, is said to be the simplest and most compact code base to train and fine-tune GPT models. Whereas nanoGPT can train models in the 100-million-parameter range, gigaGPT trains models of well over 100 billion parameters.
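For readers unfamiliar with what a “simple and compact code base to train GPT models” looks like, the sketch below shows a nanoGPT-style next-token-prediction training loop in PyTorch. It is not Cerebras’ gigaGPT or Karpathy’s nanoGPT code; the model sizes, data, and hyperparameters are toy assumptions purely for illustration.

```python
# A minimal, hypothetical sketch of a nanoGPT-style training loop.
# NOT Cerebras' gigaGPT code; all sizes and data here are illustrative only.
import torch
import torch.nn as nn

class TinyGPT(nn.Module):
    """A toy decoder-only transformer, far smaller than nanoGPT's ~100M-parameter models."""
    def __init__(self, vocab_size=256, d_model=64, n_head=4, n_layer=2, block_size=32):
        super().__init__()
        self.block_size = block_size
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(block_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_head, 4 * d_model, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layer)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, idx):
        b, t = idx.shape
        pos = torch.arange(t, device=idx.device)
        x = self.tok_emb(idx) + self.pos_emb(pos)
        # Causal mask: each position may attend only to earlier positions.
        mask = torch.triu(torch.ones(t, t, dtype=torch.bool, device=idx.device), diagonal=1)
        x = self.blocks(x, mask=mask)
        return self.lm_head(x)

model = TinyGPT()
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)
# Random tokens stand in for a real tokenized corpus.
data = torch.randint(0, 256, (8, 33))

for step in range(100):
    xb, yb = data[:, :-1], data[:, 1:]  # shift by one token for next-token targets
    logits = model(xb)
    loss = nn.functional.cross_entropy(logits.reshape(-1, 256), yb.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 20 == 0:
        print(step, loss.item())
```

Scaling a loop like this from roughly 100 million to over 100 billion parameters is the hard part gigaGPT targets, since models of that size no longer fit on a single conventional GPU without extra distribution machinery.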