Siarhei Berdachuk

Articles

5/29/2025

Test results for locally run LLMs using Ollama on an AMD Ryzen 7 8745HS

Local LLM benchmark data on GPU: AMD Ryzen 7 8745HS, 16/64 GB RAM

5/27/2025

How to Run LLMs Locally: A Complete Step-by-Step Guide

Unlock the power of AI on your own hardware with this comprehensive guide to running large language models (LLMs) locally. Learn about privacy, hardware requirements, quantization, and practical tools like LM Studio and Ollama for private, cost-effective, and customizable AI.

5/26/2025

Hardware Options for Running Local LLMs 2025

Explore the optimal hardware for running large language models (LLMs) locally, from entry-level edge devices like NVIDIA Jetson Orin Nano Super to powerful GPUs, Apple M3 Ultra Mac Studio, and modern CPUs like AMD Ryzen 7 8745HS. Learn how to choose the right setup for efficient AI performance, balancing cost, speed, and scalability.

5/25/2025

How to benchmark local LLMs

A practical guide to benchmarking local LLMs using Ollama: covers script automation, hardware detection, and performance metrics

5/24/2025

Test results for locally run LLMs using Ollama on an RTX 4060 Ti

Local LLM benchmark data sorted by evaluation performance

12/20/2024

LLM Learning Roadmap for Software Developers

Master LLMs with a step-by-step roadmap: Spring AI, Ollama, RAG, prompt engineering, agents, and fine-tuning for scalable AI apps.

© 2024-2025 Siarhei Berdachuk. All rights reserved.