
return {
  summary = 'An object that holds raw audio samples.',
  description = [[
A Sound stores the data for a sound.  The supported sound formats are OGG, WAV, and MP3.  Sounds
cannot be played directly.  Instead, there are `Source` objects in `lovr.audio` that are used
for audio playback.  All Source objects are backed by one of these Sounds, and multiple Sources
can share a single Sound to reduce memory usage.

Metadata
---

Sounds hold a fixed number of frames.  Each frame contains one audio sample for each channel.
The `SampleFormat` of the Sound is the data type used for each sample (floating point, integer,
etc.).  The Sound has a `ChannelLayout`, representing the number of audio channels and how they
map to speakers (mono, stereo, etc.).  The sample rate of the Sound indicates how many frames
should be played per second.  The duration of the sound (in seconds) is the number of frames
divided by the sample rate.
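
As a sketch, the duration can be recomputed from the frame count and sample rate (assuming the
standard `Sound` accessors; `'clap.wav'` is a hypothetical file):

```lua
-- Load a sound and inspect its metadata.
local sound = lovr.data.newSound('clap.wav')

local frames = sound:getFrameCount() -- number of frames
local rate = sound:getSampleRate()   -- frames per second

-- Duration in seconds is frames divided by sample rate.
print(frames / rate)
print(sound:getDuration()) -- should match
```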

Compression
---

Sounds can be compressed.  Compressed sounds are stored compressed in memory and are decoded as
they are played.  This uses much less memory but increases CPU usage during playback.  OGG and
MP3 are compressed audio formats.  When creating a sound from a compressed format, there is an
option to immediately decode it, storing it uncompressed in memory.  It can be a good idea to
decode short sound effects, since they won't use very much memory even when uncompressed, and
decoding them reduces CPU usage during playback.  Compressed sounds cannot be written to using
`Sound:setFrames`.
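
For example, a sketch of the tradeoff (assuming the second argument to `lovr.data.newSound`
requests immediate decoding; the filenames are hypothetical):

```lua
-- Keep long music compressed; decode short effects up front.
local music = lovr.data.newSound('music.ogg')   -- stays compressed, decoded during playback
local hit = lovr.data.newSound('hit.ogg', true) -- decoded immediately, stored uncompressed
```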

Streams
---

Sounds can be created as a stream by passing `'stream'` as their contents when creating them.
Audio frames can be written to the end of the stream and read from the beginning.  This works
well for situations where data is being generated in real time or streamed in from some other
data source.

Sources can be backed by a stream, and they'll just play whatever audio is pushed to the stream.
The audio module also lets you use a stream as a "sink" for an audio device.  For playback
devices, this works like loopback, so the mixed audio from all playing Sources will get written
to the stream.  For capture devices, all the microphone input will get written to the stream.
Conversion between sample formats, channel layouts, and sample rates will happen automatically.

Keep in mind that streams can still only hold a fixed number of frames.  If too much data is
written before it is read, older frames will start to get overwritten.  Similarly, it's possible
to read data faster than it is written, in which case there will be no frames left to read.
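
A minimal stream sketch (assumes the frame-count form of `lovr.data.newSound`, that
`Sound:setFrames` appends to a stream, and that `lovr.audio.newSource` accepts a Sound):

```lua
-- Create a stream that can buffer up to 1 second of mono floating point audio.
local stream = lovr.data.newSound(48000, 'f32', 'mono', 48000, 'stream')

-- A Source backed by the stream plays whatever audio is pushed to it.
local source = lovr.audio.newSource(stream)
source:play()

-- Generate 1 second of a 440Hz sine wave and append it to the end of the stream.
local frames = {}
for i = 1, 48000 do
  frames[i] = math.sin(2 * math.pi * 440 * (i - 1) / 48000)
end
stream:setFrames(frames)
```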

Ambisonics
---

Ambisonic sounds can be imported from WAVs, but cannot yet be played.  Sounds with a
`ChannelLayout` of `ambisonic` are stored as first-order full-sphere ambisonics using the AmbiX
format (ACN channel ordering and SN3D normalization).  The AMB format is supported for import
and will automatically get converted to AmbiX.  See `lovr.data.newSound` for more info.
  ]],
  constructor = 'lovr.data.newSound'
}