Audio doesn’t have frames; it has a sample rate. Functionally, though, it’s the same idea: for the picture in film, a camera captures a certain number of images per second (say, 24). For audio, the typical film sample rate is 48,000 samples per second (think “slices” of sound, just as each frame of a movie is a single image that blends into motion). This is written as “48 kHz” for short (48,000 hertz, or 48 kilohertz).
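To make the “slices” analogy concrete, here’s a tiny back-of-the-envelope calculation (a Python sketch, with the constants taken straight from the numbers above):

```python
# A quick sketch of the "slices" math, using the figures from the post.
FILM_FPS = 24            # images per second for "conventional" film
SAMPLE_RATE_HZ = 48_000  # audio samples per second, i.e. 48 kHz

frame_duration_ms = 1000 / FILM_FPS            # how long one image is on screen
samples_per_frame = SAMPLE_RATE_HZ / FILM_FPS  # audio "slices" in that same window

print(f"One film frame lasts {frame_duration_ms:.1f} ms")        # ~41.7 ms
print(f"...and contains {samples_per_frame:.0f} audio samples")  # 2000
```

So every single frame of picture has 2,000 audio samples sitting underneath it.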
If you’re referring to my previous post in this thread where I mention sample rate differences: think of the difference between TV and film (TV is “conventionally” 30 fps, while film is “conventionally” 24 fps), where mixing the two can cause weird compatibility problems at times. The same goes for audio. For whatever reason, film audio standardized on 48 kHz, while music (starting with CDs) standardized on 44.1 kHz. (It probably helped that 48,000 divides evenly into whole samples per frame at 24, 25, and 30 fps.)
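If you want a feel for how “weird” those problems can get, here’s a rough sketch of the most basic failure mode: playing audio at one rate back on a clock running at the other, without converting. The numbers are just the two standards above:

```python
# One classic mismatch: a 44.1 kHz file played on a 48 kHz clock without
# resampling runs fast and sharp by the ratio of the two rates.
CD_RATE_HZ = 44_100    # music / CD standard
FILM_RATE_HZ = 48_000  # film / video standard

ratio = FILM_RATE_HZ / CD_RATE_HZ
print(f"Playback ratio: {ratio:.4f}x "
      f"(~{(ratio - 1) * 100:.1f}% too fast and too high in pitch)")
# -> 1.0884x, roughly 8.8% fast; the fix is a proper sample rate conversion
```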
Video editing programs will typically constrain your audio edits to frame boundaries by default, but that’s a function / limitation of the video editing software only - it doesn’t reflect how audio actually “works.” I edit in Pro Tools, which is an audio-specific program, and I can toggle between placing audio freely (down to the individual sample on a 48 kHz timeline) or snapping to a “grid,” which lets me nudge audio frame-by-frame, like in a video program. That’s useful when compensating for, say, a change in the picture edit (where you want to move everything in whole frames), but a single frame at 24 fps spans 2,000 samples, so compared to how audio actually works it’s a fairly coarse grid.
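For illustration only (this is a toy sketch, not how Pro Tools works internally), frame-grid snapping amounts to rounding a sample-accurate position to the nearest multiple of 2,000 samples:

```python
# Toy illustration of "free" vs "grid" placement at 48 kHz / 24 fps.
SAMPLE_RATE_HZ = 48_000
FPS = 24
SAMPLES_PER_FRAME = SAMPLE_RATE_HZ // FPS  # 2000 samples per frame

def snap_to_frame_grid(position_samples: int) -> int:
    """Round an edit position (in samples) to the nearest frame boundary."""
    frame = round(position_samples / SAMPLES_PER_FRAME)
    return frame * SAMPLES_PER_FRAME

free_position = 53_123                       # sample-accurate placement
snapped = snap_to_frame_grid(free_position)  # 54_000, i.e. frame 27

print(f"Free placement : sample {free_position}")
print(f"Frame-grid snap: sample {snapped} "
      f"(moved by {snapped - free_position} samples)")
```

In this made-up example the snap moves the edit by 877 samples (about 18 ms), which is exactly the kind of precision you give up when you let the frame grid decide where audio lands.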