
    I Ran Deepseek R1 on Raspberry Pi 5 and No, it Wasn’t 200 tokens/s

    January 27, 2025

    Since the launch of DeepSeek AI, every tech media outlet has been losing its mind over it. It’s been shattering records, breaking benchmarks, and becoming the go-to name in AI innovation.

    DeepSeek vs OpenAI benchmark | Source: Brian Roemmele

    Recently, I stumbled upon a post on my X feed (don’t judge me, I’m moving to Bluesky soon!) where someone claimed to have run Deepseek on a Raspberry Pi at 200 tokens/second.

    My head started spinning. “wHaaaTTT?!”

    Naturally, I doom-scrolled the entire thread to make sense of it. Turns out, the guy used an AI accelerator module on top of the Pi to hit those numbers.

    But curiosity is a powerful motivator. Since I didn’t have an AI module lying around, I thought, why not test the raw performance of Deepseek on a plain Raspberry Pi 5? Who’s stopping me?

    So, for this article, I installed Ollama on my Pi 5 (8 GB model) and downloaded the DeepSeek R1 model at different parameter sizes (1.5B, 7B, 8B, and 14B, to be specific).

    💡
    If you’re new or unsure about setting things up, don’t worry, we already have a detailed guide on installing Ollama on a Raspberry Pi to help you get started.
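    For reference, pulling and running these models with Ollama looks roughly like this. The `deepseek-r1` tags are the ones Ollama publishes for the distilled R1 models, and `--verbose` is the flag that prints the timing stats quoted throughout this article:

    ```shell
    # pull the distilled DeepSeek R1 models tested below
    ollama pull deepseek-r1:1.5b
    ollama pull deepseek-r1:7b
    ollama pull deepseek-r1:8b

    # run one interactively; --verbose prints token/s stats after each response
    ollama run deepseek-r1:1.5b --verbose
    ```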

    Here’s how each one performed:

    Deepseek 1.5B

    This model was snappy. It felt surprisingly responsive and handled paraphrasing tasks with ease. I didn’t encounter any hallucinations, making it a solid choice for day-to-day tasks like summarization and text generation.

    Performance stats

    To test its capability further, I posed the question: What’s the difference between Podman and Docker?

    The model gave a decent enough answer, clearly breaking down the differences between the two containerization tools.

    It highlighted how Podman is daemonless, while Docker relies on a daemon, and touched on security aspects like rootless operation.

    This response took about a minute and a half, and here’s how the performance data stacked up:

    total duration:       1m33.59302487s
    load duration:        44.322672ms
    prompt eval count:    13 token(s)
    prompt eval duration: 985ms
    prompt eval rate:     13.20 tokens/s
    eval count:           855 token(s)
    eval duration:        1m32.562s
    eval rate:            9.24 tokens/s
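    A quick sanity check: the eval rate Ollama reports is just tokens generated divided by generation time, so you can reproduce the figure from the raw numbers above:

    ```shell
    # 855 tokens generated over 92.562 seconds (from the stats above)
    awk 'BEGIN { printf "%.2f tokens/s\n", 855 / 92.562 }'
    # prints 9.24 tokens/s
    ```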

    Deepseek 7B

    The 7B model introduced a fair amount of hallucination. I tried writing a creative prompt asking for three haikus, but it started generating endless text, even asking itself questions!

    While amusing, it wasn’t exactly practical. For benchmarking purposes, I simplified my prompts, as seen in the video. Performance-wise, it was slower, but still functional.

    Performance stats

    To test it further, I asked: What’s the difference between Docker Compose and Docker Run? The response was a blend of accurate and imprecise information.

    It correctly explained that Docker Compose is used to manage multi-container applications via a docker-compose.yml file, while Docker Run is typically for running single containers with specific flags.

    However, it soon spiraled into asking itself questions like, “But for a single app, say a simple Flask app on a single machine, Docker Run might be sufficient? Or is there another command or method?”

    Here’s how the performance data turned out:

    total duration:       4m20.665430872s
    load duration:        39.565944ms
    prompt eval count:    11 token(s)
    prompt eval duration: 3.256s
    prompt eval rate:     3.38 tokens/s
    eval count:           517 token(s)
    eval duration:        4m17.368s
    eval rate:            2.01 tokens/s

    Deepseek 8B

    This was the wild card. I didn’t expect the 8B model to run at all, considering how resource-hungry these models are. To my surprise, it worked!

    The performance was on par with the 7B model, neither fast nor particularly responsive, but hey, running an 8B model on a Raspberry Pi without extra hardware is a win in my book.

    Performance stats

    I tested it by asking, “Write an HTML boilerplate and CSS boilerplate.” The model successfully generated a functional HTML and CSS boilerplate in a single code block, ensuring they were neatly paired.

    However, before jumping into the solution, the model explained its approach: what it was going to do and what else could be added.

    While this was informative, it felt unnecessary for a straightforward query. If I had crafted the prompt more precisely, the response might have been more direct (i.e. user error).

    Here’s the performance breakdown:

    total duration:       6m53.350371838s
    load duration:        44.410437ms
    prompt eval count:    13 token(s)
    prompt eval duration: 4.99s
    prompt eval rate:     2.61 tokens/s
    eval count:           826 token(s)
    eval duration:        6m48.314s
    eval rate:            2.02 tokens/s

    Deepseek 14B?

    Unfortunately, this didn’t work. The 14B model required over 10 GB of RAM, which my 8 GB Pi couldn’t handle. After the success of the 8B model, my hopes were high, but alas, reality struck.
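    A rough back-of-envelope estimate shows why. Assuming Ollama's default 4-bit quantization at roughly 4.85 bits per weight (my approximation for q4_K_M, not an official figure), the 14B model's weights alone come to about 8.5 GB, before you even count the KV cache and the OS itself:

    ```shell
    # weights-only memory: 14e9 parameters x ~4.85 bits, converted to GB
    awk 'BEGIN { printf "%.1f GB\n", 14e9 * 4.85 / 8 / 1e9 }'
    # prints 8.5 GB
    ```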

    Conclusion

    DeepSeek’s raw performance on the Raspberry Pi 5 showcases the growing potential of SBCs for AI workloads.

    The 1.5B model is a practical option for lightweight tasks, while the 7B and 8B models demonstrate the Pi’s ability to handle larger workloads, albeit slowly.

    I’m excited to test DeepSeek on the ArmSoM AIM7 with its 6 TOPS NPU. Its RK3588 SoC could unlock even better performance, and I’ll cover those results in a future article.

    If you’re interested in more of my experiments, check out this article where I ran 9 popular LLMs on the Raspberry Pi 5.

    Until then, happy tinkering, and remember: don’t ask AI to write haikus unless you want a never-ending saga. 😉
