Articles - Page 3

29-May-2025
Test results for locally running LLMs using Ollama on an AMD Ryzen 7 8745HS
Benchmark data for local LLMs on the AMD Ryzen 7 8745HS (16/64 GB RAM)

26-May-2025
Hardware Options for Running Local LLMs 2025
Explore the optimal hardware for running large language models (LLMs) locally, from entry-level edge devices like NVIDIA Jetson Orin Nano Super to powerful GPUs, Apple M3 Ultra Mac Studio, and modern CPUs like AMD Ryzen 7 8745HS. Learn how to choose the right setup for efficient AI performance, balancing cost, speed, and scalability.

25-May-2025
How to benchmark local LLMs
A practical guide to benchmarking local LLMs using Ollama: covers script automation, hardware detection, and performance metrics

24-May-2025
Test results for locally running LLMs using Ollama on an RTX 4060 Ti
Benchmark data for local LLMs, sorted by evaluation performance