
    Summarize meetings in 5 minutes with Python

    February 21, 2025


    Virtual meetings have become a cornerstone of modern work, but reviewing lengthy recordings can be time-consuming. In this tutorial you’ll learn how to automatically generate summaries of meeting recordings in less than 10 lines of code using AssemblyAI’s API.

LLM-powered meeting summaries

Note: This tutorial uses AssemblyAI’s dedicated summarization model. If you’d rather generate meeting summaries with an LLM instead, see the related blog post on LLM-powered meeting summaries.

    Getting Started

First, sign up for a free AssemblyAI API key on the AssemblyAI website. The free tier includes hundreds of hours of free speech-to-text and summarization.

    You’ll need to have Python installed on your system to follow along, so install it if you haven’t already. Then install AssemblyAI’s Python SDK, which will allow you to call the API from your Python code:

    pip install -U assemblyai
    

    Basic implementation

    It’s time to set up your summarization workflow in Python. First, import the AssemblyAI SDK and configure your API key. While we set the API key inline here for simplicity, you should store it securely in a configuration file or environment variable in production code, and never check it into source control.

    import assemblyai as aai
    aai.settings.api_key = "YOUR_API_KEY"
    
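As a sketch of the environment-variable approach mentioned above (the variable name `ASSEMBLYAI_API_KEY` is chosen here for illustration; the SDK doesn’t mandate one):

```python
import os

def load_api_key(env_var="ASSEMBLYAI_API_KEY"):
    """Return the API key from the environment, or None if it is not set."""
    return os.getenv(env_var)

api_key = load_api_key()
if api_key is None:
    print("Set ASSEMBLYAI_API_KEY before running this script.")
# Then configure the SDK as shown above:
# aai.settings.api_key = api_key
```

This keeps the key out of your source tree, so it never ends up in version control.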

    Next, create a transcription configuration object like the one below. This TranscriptionConfig configuration tells AssemblyAI that you want to enable summarization when you submit a file for speech-to-text, and also specifies which model you want to use for the summarization as well as the type (or format) of summary you want to create:

    config = aai.TranscriptionConfig(
        summarization=True,
        summary_model=aai.SummarizationModel.informative,
        summary_type=aai.SummarizationType.bullets
    )
    

    Next, create a transcriber object, which will handle the transcription of audio files. Passing in the TranscriptionConfig defined above applies these settings to any transcription created by this transcriber.

    transcriber = aai.Transcriber(config=config)
    

    Now you can submit files for transcription using the transcribe method of the transcriber. You can input either a local filepath or a remote URL to a publicly-accessible audio or video file you want to transcribe:

    # example remote file
    transcript = transcriber.transcribe("https://storage.googleapis.com/aai-web-samples/ravenel_bridge.opus")
    # or you can use a local file
    # transcript = transcriber.transcribe("path/to/your/audio.mp3")
    

    You can then access the summary information through the summary attribute of the resulting transcript:

    print(transcript.summary)
    

    Here’s the output for the example file above:

    - The Arthur Ravenel Jr Bridge opened in 2005 and is the longest cable stayed bridge in the Western Hemisphere. The design features two diamond shaped towers that span the Cooper river and connect downtown Charleston with Mount Pleasant. The bicycle pedestrian paths provide unparalleled views of the harbor.
    

    Using Python’s requests library

    If you prefer to use the requests library instead of AssemblyAI’s SDK, you can make a POST request to the API endpoint directly. A POST request to our API for a transcription with summarization looks like this, using cURL:

    curl https://api.assemblyai.com/v2/transcript \
    --header "Authorization: <YOUR_API_KEY>" \
    --header "Content-Type: application/json" \
    --data '{
      "audio_url": "YOUR_AUDIO_URL",
      "summarization": true,
      "summary_model": "informative",
      "summary_type": "bullets"
    }'
    

    Here’s how to make such a request in Python using the requests library:

    import requests
    
    url = "https://api.assemblyai.com/v2/transcript"
    headers = {
        "Authorization": "<YOUR_API_KEY>",
        "Content-Type": "application/json"
    }
    data = {
        "audio_url": "YOUR_AUDIO_URL",
        "summarization": True,
        "summary_model": "informative", 
        "summary_type": "bullets"
    }
    
    response = requests.post(url, json=data, headers=headers)
    

The response contains a transcript ID that you can use to retrieve the transcript once it finishes processing. You can use webhooks to get notified when the transcript is ready, or poll the API to check its status. Here’s a complete example that submits a file and then polls until the transcript is ready:

    import requests
    import os
    import time
    
    URL = "https://api.assemblyai.com/v2/transcript"
    HEADERS = {
        "Authorization": os.getenv("ASSEMBLYAI_API_KEY"),
        "Content-Type": "application/json"
    }
    DATA = {
        "audio_url": "https://storage.googleapis.com/aai-web-samples/ravenel_bridge.opus",
        "summarization": True,
        "summary_model": "informative", 
        "summary_type": "bullets"
    }
    
    # Submit the transcription request
    response = requests.post(URL, json=DATA, headers=HEADERS)
    
    if response.status_code == 200:
        transcript_id = response.json()['id']
        
        # Poll until completion
        while True:
            polling_endpoint = f"https://api.assemblyai.com/v2/transcript/{transcript_id}"
            polling_response = requests.get(polling_endpoint, headers=HEADERS)
            transcript = polling_response.json()
            status = transcript['status']
            
            if status == 'completed':
                print("Transcription completed!")
                print("Summary:", transcript.get('summary', ''))
                break
            elif status == 'error':
                print("Error:", transcript['error'])
                break
                
            print("Transcript processing ...")
            time.sleep(3)
            
    else:
        print(f"Error: {response.status_code}")
        print(response.text)
    
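The webhook option mentioned above delivers a small JSON notification once processing finishes. A minimal parser for such a payload might look like the sketch below; the exact field names (`transcript_id`, `status`) are an assumption here, so verify them against the AssemblyAI webhook documentation:

```python
import json

def parse_webhook_payload(body: str):
    """Extract the transcript ID and status from a webhook notification body.

    Assumes a payload shaped like {"transcript_id": "...", "status": "completed"};
    check the webhook docs for the authoritative schema.
    """
    payload = json.loads(body)
    return payload.get("transcript_id"), payload.get("status")

# Once notified, fetch the finished transcript with a single GET:
# requests.get(f"https://api.assemblyai.com/v2/transcript/{transcript_id}", headers=HEADERS)
```

Compared with polling, a webhook avoids the repeated GET requests at the cost of running an endpoint the API can reach.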

    Customizing summaries

    AssemblyAI offers different models and summary types to suit various use cases:

    Summary models

    • informative (default): Optimized for single-speaker content like presentations
    • conversational: Best for two-person dialogues like interviews
    • catchy: Designed for creating engaging media titles

    Summary types

    • bullets (default): Key points in bullet format
    • bullets_verbose: Comprehensive bullet-point summary
    • gist: Brief summary in a few words
    • headline: Single-sentence summary
    • paragraph: Paragraph-form summary
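Since these options travel as plain request fields, a small helper can validate them before you submit. This is a sketch, with the allowed values taken from the lists above; note that the API may restrict which model/type combinations are valid together, which this helper does not check:

```python
VALID_MODELS = {"informative", "conversational", "catchy"}
VALID_TYPES = {"bullets", "bullets_verbose", "gist", "headline", "paragraph"}

def build_summary_request(audio_url, model="informative", summary_type="bullets"):
    """Build the JSON body for a transcription request with summarization enabled."""
    if model not in VALID_MODELS:
        raise ValueError(f"unknown summary model: {model}")
    if summary_type not in VALID_TYPES:
        raise ValueError(f"unknown summary type: {summary_type}")
    return {
        "audio_url": audio_url,
        "summarization": True,
        "summary_model": model,
        "summary_type": summary_type,
    }
```

Catching a typo locally like this is cheaper than waiting for the API to reject the request.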

    Best practices and troubleshooting

    Choosing the right model and summary type is important for optimal results. For clear recordings with two speakers, use the conversational model. If you’re working with shorter content, the gist or headline types will serve you better, while longer recordings are best handled by bullets_verbose or paragraph formats.

    If you’re having trouble getting good results, first verify your audio quality meets these criteria:

    • Speakers’ voices are distinct and easily distinguishable
    • Background noise is minimized
    • Audio input is clear and well-recorded

    And finally, some technical considerations to keep in mind:

    • The Summarization model cannot run simultaneously with Auto Chapters
    • Summarization must be explicitly enabled in your configuration
    • Processing times may vary based on your audio length and complexity

    Next steps

To learn more about how to use our API and the features it offers, check out our Docs, or browse our cookbooks repository for solutions to common use cases. Alternatively, check out our blog for tutorials and deep dives on AI theory, like this Introduction to LLMs, or our YouTube channel for project tutorials and more, like this one on building an AI voice agent in Python with DeepSeek R1.
