Lenovo introduces entry-level, liquid-cooled AI edge server

Lenovo has announced the ThinkEdge SE100, an entry-level AI inferencing server designed to make edge AI affordable for enterprises as well as small and medium-sized businesses.
AI systems are not normally associated with being small and compact; they’re big, decked-out servers with lots of memory, GPUs, and CPUs. But the SE100 is built for inferencing, the less compute-intensive portion of AI processing, Lenovo stated. GPUs are considered overkill for inferencing, and several startups are making small PC cards with an inferencing chip on them instead of the more power-hungry CPUs and GPUs.
This design brings AI to the data rather than the other way around. Instead of sending data to the cloud or a data center for processing, edge computing uses devices located at the data source, which reduces latency and the volume of data sent upstream, Lenovo stated.
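To make the pattern concrete, here is a minimal sketch of what edge inferencing looks like in practice: a lightweight model runs on a device near the data source, and only the compact result, not the raw sensor data, is forwarded upstream. The model file, input shape, and payload format are hypothetical placeholders for illustration; they are not details of Lenovo's SE100 software stack.

```python
# Edge-inference sketch: process the data where it is produced and send only
# a small summary to the cloud. Assumes onnxruntime and numpy are installed;
# "defect_classifier.onnx" is a hypothetical model file on the edge device.
import json

import numpy as np
import onnxruntime as ort  # lightweight CPU inference runtime often used at the edge

# Load a small, pre-trained model that lives on the edge device itself.
session = ort.InferenceSession("defect_classifier.onnx")
input_name = session.get_inputs()[0].name


def classify_frame(frame: np.ndarray) -> dict:
    """Run inference locally and return a compact, cloud-friendly summary."""
    scores = session.run(None, {input_name: frame[np.newaxis, :]})[0]
    return {"label": int(np.argmax(scores)), "confidence": float(np.max(scores))}


# A raw camera frame (random data standing in for a real sensor reading).
frame = np.random.rand(3, 224, 224).astype(np.float32)
result = classify_frame(frame)

# Only this few-byte JSON payload would travel to the cloud or data center,
# instead of the multi-megabyte raw frame -- the latency and bandwidth saving
# the article attributes to edge computing.
print(json.dumps(result))
```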