Soundtheory launches the third edition of Gullfoss: Gullfoss Master. This major update introduces a no-compromise edition of Gullfoss with all quality-related parameters set to maximum, designed specifically for mastering engineers who require the highest precision.

While the three editions share a common feature set, each is tuned to perform best at a particular stage of the recording process: Gullfoss Live for tracking, the original Gullfoss for mixing, and now Gullfoss Master to put the final touch on your music.

Gullfoss Master allows finer parameter adjustments and optimizes the auditory model for small gain changes. It also increases the internal processing precision, reducing the noise floor even further.

All three editions of Gullfoss are now available to users in one easy download:

Gullfoss Master

  • Extended auditory model tuned for mastering
  • Finest parameter precision
  • 20ms latency, higher CPU consumption

Gullfoss Standard

  • Suitable for most mixing and mastering applications
  • 20ms latency, lower CPU consumption
  • Standard auditory model

Gullfoss Live

  • Suitable for live music mixing and tracking
  • Latency below 2ms
  • Minimal treatment of transients, leading to a different sound character

Price & Availability: Soundtheory offers all three editions in one download for $199. Until the end of July, there is a 30% discount; use code MASTER30 at checkout.

Current owners of Gullfoss can upgrade at any time at no charge. A two-week free trial with full functionality is available. New owners can purchase all three editions of Gullfoss at the original price of $199 on macOS and Windows.

About Soundtheory

Soundtheory is the brainchild of mathematical physicist Andreas Tell, who has been researching and working with sound for over 20 years. Together with Managing Director David Pringle and Development Director Andreas Beisler, Soundtheory has developed exciting new and unique methods for real-time audio processing. Its highly advanced model of computational auditory perception opens up new possibilities for analyzing sound as perceived by human ears and for processing it without introducing any audible artefacts at all.