Ensure compatibility across multiple platforms, including .NET 6.0, .NET Framework 4.6.2, and .NET Standard 2.0 and above. Minimize dependencies to avoid version conflicts and the need for binding redirects.

Transcribing Audio Files

One of the core features of the SDK is audio transcription. Developers can transcribe audio files asynchronously or in real time. Below is an example of how to transcribe an audio file:

```csharp
using AssemblyAI;
using AssemblyAI.Transcripts;

var client = new AssemblyAIClient("YOUR_API_KEY");

var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3"
});

transcript.EnsureStatusCompleted();
Console.WriteLine(transcript.Text);
```

For local files, similar code can be used:

```csharp
await using var stream = new FileStream("./nbc.mp3", FileMode.Open);

var transcript = await client.Transcripts.TranscribeAsync(
    stream,
    new TranscriptOptionalParams
    {
        LanguageCode = TranscriptLanguageCode.EnUs
    }
);

transcript.EnsureStatusCompleted();
Console.WriteLine(transcript.Text);
```

Real-Time Audio Transcription

The SDK also supports real-time audio transcription using Streaming Speech-to-Text. This feature is especially useful for applications that need immediate processing of audio data.

```csharp
using AssemblyAI.Realtime;

await using var transcriber = new RealtimeTranscriber(new RealtimeTranscriberOptions
{
    ApiKey = "YOUR_API_KEY",
    SampleRate = 16_000
});

transcriber.PartialTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Partial: {transcript.Text}")
);
transcriber.FinalTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Final: {transcript.Text}")
);
```
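Since microphone capture is platform-specific, a hedged sketch of feeding the streaming transcriber from a local raw PCM file may help make the flow concrete. The file name, chunk size, and the exact `SendAudioAsync` overload here are assumptions rather than documented specifics, and real microphone input would need a platform audio library such as NAudio:

```csharp
// Sketch: stream a local 16 kHz, 16-bit mono PCM file to the realtime
// transcriber in fixed-size chunks (file name and chunk size are
// illustrative assumptions).
await transcriber.ConnectAsync();

await using var audio = new FileStream("./audio.pcm", FileMode.Open);
var buffer = new byte[4096];
int read;
while ((read = await audio.ReadAsync(buffer, 0, buffer.Length)) > 0)
{
    // Send only the bytes actually read from the file.
    await transcriber.SendAudioAsync(buffer[..read]);
}

await transcriber.CloseAsync();
```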
The transcriber is then connected, fed audio, and closed:

```csharp
await transcriber.ConnectAsync();

// Pseudocode for obtaining audio from a microphone
GetAudio(async (chunk) => await transcriber.SendAudioAsync(chunk));

await transcriber.CloseAsync();
```

Using LeMUR for LLM Apps

The SDK integrates with LeMUR to let developers build large language model (LLM) applications on voice data. Here is an example:

```csharp
var lemurTaskParams = new LemurTaskParams
{
    Prompt = "Provide a short summary of the transcript.",
    TranscriptIds = [transcript.Id],
    FinalModel = LemurModel.AnthropicClaude3_5_Sonnet
};

var response = await client.Lemur.TaskAsync(lemurTaskParams);
Console.WriteLine(response.Response);
```

Audio Intelligence Models

In addition, the SDK has built-in support for audio intelligence models, enabling sentiment analysis and other advanced features:

```csharp
var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3",
    SentimentAnalysis = true
});

foreach (var result in transcript.SentimentAnalysisResults!)
{
    Console.WriteLine(result.Text);
    Console.WriteLine(result.Sentiment); // POSITIVE, NEUTRAL, or NEGATIVE
    Console.WriteLine(result.Confidence);
    Console.WriteLine($"Timestamp: {result.Start} - {result.End}");
}
```

For more information, see the official AssemblyAI blog.

Image source: Shutterstock