Startup Fractile tackles AI inference bottlenecks with new chip design

From academic to entrepreneur
Goodwin says the transition from academia to running a company has been fairly smooth. He points out that the academic world has a competitive aspect of its own.
“You end up a founding team of one for a company that is your thesis effort. You need to ensure that you have a competitive edge and a different angle than other people in the field,” he says.
“The Ph.D. is high-altitude training for running a company. As a founder, one of the things I like about building a company is all the people that you have to find and persuade,” Goodwin adds. So far, he has built a workforce of 32 “unbelievably brilliant colleagues.”
What does Fractile do?
There are two pieces to an AI model – training and inference. Training prepares the LLM for inference, which is the production of outputs: responses to queries and generated text, images, code, and so on.
Goodwin explains that in the traditional approach to inference, the AI chip must repeatedly fetch the model’s weights, which are stored in DRAM and can amount to terabytes of data moving from memory to compute. This transfer happens every time the AI system, like ChatGPT, adds a new word to its output.
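Goodwin’s point about per-word data movement can be made concrete with a back-of-envelope calculation. The sketch below is illustrative only, not Fractile’s design: the weight size and bandwidth figures are assumptions, roughly a 70-billion-parameter model at 16-bit precision served from HBM-class DRAM.

```python
# Illustrative sketch (not Fractile's implementation): why autoregressive
# inference is memory-bound. Each new token requires streaming the full
# set of model weights from DRAM to the compute units.

WEIGHT_BYTES = 140e9        # assumption: ~140 GB of weights (70B params, 16-bit)
DRAM_BANDWIDTH = 3.35e12    # assumption: ~3.35 TB/s, an HBM-class accelerator

def time_per_token_s(weight_bytes: float, bandwidth_bytes_s: float) -> float:
    """Lower bound on per-token latency when every decode step must
    re-read all weights from off-chip memory."""
    return weight_bytes / bandwidth_bytes_s

t = time_per_token_s(WEIGHT_BYTES, DRAM_BANDWIDTH)
print(f"~{t*1e3:.1f} ms per token -> at most ~{1/t:.0f} tokens/s per stream")
```

Under these assumptions the bound works out to roughly 24 tokens per second per stream, and no amount of extra compute helps: the ceiling is set entirely by how fast the weights can cross the memory bus.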
Fractile’s answer is to fuse memory and inference processing on a single chip. “The benefits are a hundred-fold increase in effective bandwidth and much higher energy efficiency,” says Goodwin.
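Taking Goodwin’s hundred-fold figure at face value, the same memory-bound estimate scales accordingly. This is purely illustrative arithmetic, not a measured Fractile result:

```python
# Illustrative only: applying the claimed 100x effective-bandwidth gain
# to the memory-bound per-token estimate above.
baseline_s = 140e9 / 3.35e12    # ~41.8 ms/token with off-chip DRAM
fused_s = baseline_s / 100      # ~0.42 ms/token if bandwidth rises 100x
print(f"baseline: {baseline_s*1e3:.1f} ms/token, "
      f"100x: {fused_s*1e3:.2f} ms/token")
```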