Can I search voice memos by what was said inside them?

Updated May 14, 2026

Searching voice memos by spoken content is one of iOS 18's quiet wins. Here's how it works and where it falls short.

Native iOS 18+ method:

  • Open Voice Memos → tap the search bar at the top.
  • Type any word you remember saying — "espresso machine", "meeting Tuesday", "Sarah".
  • Voice Memos returns recordings where that word appears in the auto-generated transcript.

You can also search via Spotlight (swipe down from the home screen) — voice memo transcripts are indexed across the whole device.

The four limitations:

  • Transcripts only exist for recordings made on iOS 18+. Older recordings have no transcript: to make them searchable you have to replay and transcribe them manually, or run them through a transcription app.
  • Foreign-language transcription quality varies. English, Spanish, French, German, Italian, Japanese, Chinese (Simplified & Traditional), and Korean are well supported. Other languages may not transcribe at all.
  • Audio quality matters. Recordings with background noise, mumbling, or distant microphones produce noisy transcripts with many gaps. Search will miss those words.
  • No semantic search. If you said "the place with the green door" but search for "restaurant", iOS won't connect them.

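That last limitation is the important one, and it is easy to see why. Keyword search over transcripts is literal matching — the sketch below is a purely illustrative toy (the memo IDs and transcripts are made up, and this is not Apple's implementation), but it shows the behavior: a query word that was never actually spoken finds nothing.

```python
# Toy sketch of literal keyword search over voice-memo transcripts.
# Illustrative only — not how iOS actually implements it.

transcripts = {
    "memo_031": "remember the place with the green door for dinner friday",
    "memo_047": "need to descale the espresso machine before the weekend",
}

def keyword_search(query: str, transcripts: dict[str, str]) -> list[str]:
    """Return memo IDs whose transcript contains every query word literally."""
    words = query.lower().split()
    return [memo_id for memo_id, text in transcripts.items()
            if all(w in text.lower().split() for w in words)]

print(keyword_search("espresso machine", transcripts))  # finds memo_047
print(keyword_search("restaurant", transcripts))        # [] — "restaurant" was never said
```

Semantic search replaces the literal word test with a similarity comparison between embeddings of the query and the transcript, which is how "restaurant" can match "the place with the green door".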
For better search across a large library:

Némos solves three of these limitations:

  • Runs transcription on older recordings too (you can batch-import voice memos from Apple's app).
  • Adds *semantic* search on iOS 26 devices using Apple's Foundation Models — search "where I talked about the espresso machine" and it finds the recording even if you didn't say that exact phrase.
  • Lets you tag recordings (a 2-second action) so finding them is even faster.

The pro move: every Sunday, spend 5 minutes scrubbing through new voice memos. Add a one-word tag ("ideas", "meeting", "todo", "shopping"). In a year you'll have a fully searchable audio library that's faster than any text note system.

## Why this question gets asked so often

For 15 years, voice memos on iPhone were write-only — easy to record, near-impossible to find again unless you remembered the exact date or named them. This created a generation of users with 200+ unnamed recordings they'd never listen to again, what researchers at Microsoft's Productivity Lab call "the voice memo graveyard." iOS 18's auto-transcription changed the equation in October 2024, but most users still don't know it exists because it's buried two taps deep. Google search volume for "search voice memos iPhone" rose 270% between iOS 18 launch and early 2025. The question is increasingly being asked by professionals — therapists who record session notes, researchers who interview subjects, journalists who tape phone calls — for whom retrievability is the actual job-to-be-done, not capture. The 2024 Tiago Forte BASB framework explicitly calls out audio as the most undervalued capture format precisely because retrieval has been broken for so long.

## The deeper story

Searchable voice memos require three engineering layers: speech-to-text (now solved on-device with iOS 18), full-text indexing (Spotlight has done this since iOS 12 for text content but voice transcripts joined in 18.0), and semantic embeddings (iOS 26 added these for some content types but not voice memos directly). The gap that remains in 2026 is *cross-corpus search* — finding a voice memo that's semantically related to a screenshot or a note without having to know which corpus to search. This is what apps like Némos and Mem attempt: unified semantic search across all your captured content. The technical challenge is that voice memo transcripts are noisier than typed text (misrecognized words, sentence fragments, "uh" tokens), so embedding quality is meaningfully worse. The fix is post-processing: an on-device LLM cleans transcripts before embedding, removing disfluencies and adding context. Apple's Foundation Models in iOS 26 can do this at ~50ms per minute of audio.
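A rough sketch of that post-processing step is below. Here a simple regex pass stands in for the on-device LLM described above — a real cleanup model does much more (context, punctuation, speaker cues), but the core idea of stripping disfluencies before embedding looks like this:

```python
import re

# Minimal, assumption-laden sketch of transcript cleanup before embedding.
# A regex pass only approximates what an on-device LLM would do.

FILLERS = r"\b(?:uh|um|er|ah|you know|i mean)\b"

def clean_transcript(raw: str) -> str:
    text = re.sub(FILLERS, "", raw, flags=re.IGNORECASE)   # drop filler tokens
    text = re.sub(r"\b(\w+)(?:\s+\1\b)+", r"\1", text,     # collapse stutters: "the the" -> "the"
                  flags=re.IGNORECASE)
    return re.sub(r"\s+", " ", text).strip()               # normalize whitespace

print(clean_transcript("so uh the the espresso machine is um leaking again"))
# -> "so the espresso machine is leaking again"
```

The cleaned text embeds much closer to how you would phrase a search query, which is why this step measurably improves semantic retrieval.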

## Edge cases and gotchas

  • Mumbled or whispered audio: Apple's Speech framework loses 30-50% accuracy below conversational volume. Search misses these.
  • Voice memos with multiple speakers: no speaker labels in the transcript, so search returns the recording but you can't tell who said what.
  • Voice memos from before iOS 18: not auto-transcribed; you'd have to play them back or run them through a transcription app.
  • Domain jargon: medical, legal, and technical terms often get misheard ("ileum" → "Ilium"; "BLEU score" → "blue score"). Searching for the misheard version sometimes finds it.
  • Foreign-language voice memos: only the dominant language transcribes. Searching the secondary language fails silently.
  • Voice memos deleted then restored: lose Spotlight indexing for ~24 hours after restore.
  • Voice memos imported from Mac or another app: don't auto-transcribe unless you trigger transcription manually.
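The jargon mishearings above are systematic rather than random: recognizers substitute phonetically similar words, which is also why searching the misheard spelling sometimes works. A classic, purely illustrative way to see the effect (not anything iOS uses) is the Soundex phonetic code, under which "ileum" and "Ilium" collapse to the same code:

```python
def soundex(word: str) -> str:
    """Simplified Soundex: words that sound alike get the same 4-char code."""
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    word = word.lower()
    out = word[0].upper()
    prev = codes.get(word[0], "")
    for ch in word[1:]:
        digit = codes.get(ch, "")
        if digit and digit != prev:   # skip vowels and adjacent duplicates
            out += digit
        if ch not in "hw":            # h/w do not reset the duplicate check
            prev = digit
    return (out + "000")[:4]          # pad/truncate to letter + 3 digits

print(soundex("ileum"), soundex("ilium"))  # both I450 — phonetically identical
```

Two words with the same code are near-indistinguishable to a phonetics-first system, so a recognizer choosing between them falls back on language-model priors — and everyday words usually win over jargon.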

## What competitors say

Otter has had searchable voice transcripts since 2019 — the original feature that built the brand. Granola layers GPT-4 summaries on top of transcripts for meeting recall. Notta does the same with multilingual translation. Reflect Notes transcribes audio to inline note content, making everything searchable as text. Apple Notes searches voice transcripts inside notes natively. Notion has no native audio support — you'd have to use a third-party integration. Obsidian users install the "Transcribe Audio" community plugin. The unified-search story is where Némos differs from category leaders — instead of a "voice memos app" plus a "notes app" plus a "screenshots app," Némos treats all three as searchable content of the same kind, so "espresso machine receipt" finds the screenshot and the voice memo where you mentioned it.

## The 2026 verdict

Voice memo search by content is now table stakes on iOS 18+, and the bar will keep rising as Apple Foundation Models improve. The bottleneck isn't transcription accuracy anymore — it's discovery design (most users don't know the feature exists) and cross-corpus search (voice memos still live in a separate silo from notes and screenshots). For professionals who depend on audio retrieval, an app with explicit unified search and semantic embeddings will pay back the setup time within a month. For casual users, Voice Memos + iOS 18 search is now genuinely sufficient.
