We are thrilled to release the AssemblyAI .NET SDK, making it easier to use the latest Speech AI models from AssemblyAI with .NET languages like C#, VB.NET, and F#. Use the SDK to transcribe audio, analyze audio using our audio intelligence models, and apply LLMs to your audio data using LeMUR.
We set out to build the SDK with the following goals:
Make it as intuitive as possible to use all the models and features that AssemblyAI offers using idiomatic C#.
Support as many frameworks as possible so that no one maintaining older applications is left out in the cold. Hence, we support .NET 6.0, .NET Framework 4.6.2, and .NET Standard 2.0 and up.
Keep dependencies at a minimum to avoid dependency version conflicts and the need for binding redirects.
Here are a few examples showcasing the .NET SDK.
1. Transcribe an audio file
using AssemblyAI;
using AssemblyAI.Transcripts;
var client = new AssemblyAIClient("YOUR_API_KEY");
var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3"
});
transcript.EnsureStatusCompleted();
Console.WriteLine(transcript.Text);
You can also transcribe a local file, as shown here.
await using var stream = new FileStream("./nbc.mp3", FileMode.Open);
var transcript = await client.Transcripts.TranscribeAsync(
    stream,
    new TranscriptOptionalParams
    {
        LanguageCode = TranscriptLanguageCode.EnUs
    }
);
transcript.EnsureStatusCompleted();
Console.WriteLine(transcript.Text);
Learn how to transcribe audio files by following the step-by-step instructions in our docs.
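Transcription runs asynchronously on AssemblyAI's servers, so you can also look up a finished transcript later by its ID. A minimal sketch, assuming the SDK exposes a `GetAsync` method on `client.Transcripts` (check the SDK reference for the version you have installed):

```csharp
using AssemblyAI;

var client = new AssemblyAIClient("YOUR_API_KEY");

// Retrieve a transcript created earlier by its ID
var transcript = await client.Transcripts.GetAsync("YOUR_TRANSCRIPT_ID");
transcript.EnsureStatusCompleted();
Console.WriteLine(transcript.Text);
```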
2. Transcribe audio in real-time using Streaming Speech-to-Text
using AssemblyAI.Realtime;
await using var transcriber = new RealtimeTranscriber(new RealtimeTranscriberOptions
{
    ApiKey = "YOUR_API_KEY",
    SampleRate = 16_000
});
transcriber.PartialTranscriptReceived.Subscribe(transcript =>
{
    Console.WriteLine($"Partial: {transcript.Text}");
});
transcriber.FinalTranscriptReceived.Subscribe(transcript =>
{
    Console.WriteLine($"Final: {transcript.Text}");
});
await transcriber.ConnectAsync();
// Pseudocode for sending audio from a source such as a microphone
GetAudio(async (chunk) => await transcriber.SendAudioAsync(chunk));
await transcriber.CloseAsync();
Learn how to transcribe audio from the microphone by following the step-by-step instructions in our docs.
3. Use LeMUR to build LLM apps on voice data
var lemurTaskParams = new LemurTaskParams
{
    Prompt = "Provide a brief summary of the transcript.",
    TranscriptIds = [transcript.Id],
    FinalModel = LemurModel.AnthropicClaude3_5_Sonnet
};
var response = await client.Lemur.TaskAsync(lemurTaskParams);
Console.WriteLine(response.Response);
Learn how to use LLMs with audio data using LeMUR in our docs.
4. Use audio intelligence models
var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3",
    SentimentAnalysis = true
});
transcript.EnsureStatusCompleted();
foreach (var result in transcript.SentimentAnalysisResults!)
{
    Console.WriteLine(result.Text);
    Console.WriteLine(result.Sentiment); // POSITIVE, NEUTRAL, or NEGATIVE
    Console.WriteLine(result.Confidence);
    Console.WriteLine($"Timestamp: {result.Start} - {result.End}");
}
Learn more about our audio intelligence models in our docs.
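You can also enable several audio intelligence models in a single request. A sketch that turns on entity detection alongside sentiment analysis, assuming the SDK mirrors the API's `EntityDetection` parameter and `Entities` result field (verify the names against the SDK reference for your version):

```csharp
var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3",
    SentimentAnalysis = true,
    EntityDetection = true
});
transcript.EnsureStatusCompleted();

foreach (var entity in transcript.Entities!)
{
    Console.WriteLine($"{entity.EntityType}: {entity.Text}");
}
```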
Get started with the C# .NET SDK
You can find installation instructions and more information in the README of the C# .NET SDK GitHub repository. File an issue or contact us with any feedback.