

This document is a user experience narrative for performing musical tasks with Core. It aims to define a workflow with a specific set of high quality applications in accord with Core's design/development philosophy - simple and small, but powerful. This workflow should make simple tasks trivial to perform, without complex tasks being made impossible by lack of power or obscured by unnecessary complexity. Where form and function conflict, function will be chosen, but as they are often complementary it is hoped a certain utilitarian beauty will emerge. After all, we are making music. Due to limited scope, this document only outlines essential tasks. Users wishing to perform more specific tasks are encouraged to read the abundant documentation and seek support through the proper channels.

Setup/Configure the System

Load set_rlimits, OSS, libffado (optional)
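
On Core, extensions are typically loaded with tce-load; a minimal sketch (the -w flag fetches from the repository if the extension isn't already local):

  tce-load -wi set_rlimits OSS libffado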

Interfaces, Devices and Soundcards

One of the keys to having a good audio experience with Linux is to make sure your hardware is supported. This is more about making good judgments than about buying expensive interfaces.

Kernel

  • set_rlimits
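
set_rlimits grants realtime scheduling privileges to unprivileged users via entries in its config file. A minimal sketch, assuming jackd is the program to be elevated (the path, group name, and limit values here are illustrative, not prescriptive):

  # /etc/set_rlimits.conf
  @audio  /usr/local/bin/jackd  nice=-15 rtprio=80 memlock=100000

then launch through the wrapper:

  set_rlimits jackd -d oss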

Choosing a Driver

  • OSS
  • ALSA
  • libffado

Routing Audio and Adjusting Levels

Load Jack-OSS, non-session, non-mixer

Routing

You can also route audio without Jack (see below). It's important to remember not to load more than one Jack extension: different versions are incompatible and will explode.
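
With Jack running, ports can be inspected and wired from the command line; a sketch using the stock Jack utilities (port names vary by system):

  jack_lsp                                         # list available ports
  jack_connect system:capture_1 system:playback_1  # wire an input to an output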

Levels

Defining Tasks

A rudimentary mental map of some audio tasks. The boundaries between all these kinds of data/audio are arbitrary. All human music takes this journey, its endpoints unchanged from the beginning of our making songs to the end.

1) HB ⇒ [X] ⇒ S

2) S ⇒ [X] ⇒ HB

where HB is the human brain, S is the sounds we hear and X is any number of steps as defined below.

which in English is:

1) something in a brain expresses itself as sound.

2) an audience listens, and something is transmitted to the brain.

Sometimes there's a machine in the middle which helps us do either physical or mental work.

HP = human performance
HN = human notation
HB = human brain
S = sound
I = instrument (physical work aid [work ⇒ wave])

These days, thanks to the transistor, CPUs are for all intents and purposes digital. Hence:

ADC = analog-to-digital converter
CPU = instrument (computational aid)
MEM = computer memory (usually disk or RAM)
DAC = digital-to-analog converter
MIC = microphone (sound to analog signal)
SPK = speaker (analog signal to sound)

Classical Composition

HB ⇒ HN

Performing sheet music.

HN ⇒ HB ⇒ HP ⇒ I ⇒ S

Improvisation.

HB ⇒ HP ⇒ I ⇒ S

Studio

Load non-daw, yoshimi, cuse

HP ⇒ [X] ⇒ MEM

where X is any number of steps.

Recording Performances

HP ⇒ I ⇒ S ⇒ MIC ⇒ ADC ⇒ CPU ⇒ MEM
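
A minimal sketch of this chain using sox's rec alias (hypothetical filename; assumes sox can open the default input device):

  rec -c 2 take1.wav    # record stereo from the default input until Ctrl-C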

Digital Instruments (MIDI/OSC)

    S
    |

HP ⇒ I ⇒ ADC ⇒ CPU ⇒ MEM

Software Instruments

Synths
Drum Machines

Computer-Aided Composition

Load flabc, abcm2ps, abcmidi, playabc, non-sequencer, btektracker

HN ⇒ [X] ⇒ CPU ⇒ [X] ⇒ HN

Scoring
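
As a sketch of this loop with the tools above: given a short tune in ABC notation saved as tune.abc (a hypothetical filename),

  X:1
  T:Example
  M:4/4
  K:C
  CDEF GABc|

it can be engraved and rendered:

  abcm2ps tune.abc     # engrave to PostScript (writes Out.ps)
  abc2midi tune.abc    # render to a MIDI file (writes tune1.mid)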

Sequencing

HN ⇒ [X] ⇒ MEM

where HN = sequencing pattern.

Loop Sequencers

Trackers

Production

Load non-daw, sox

MEM ⇒ CPU ⇒ MEM

Multi-track Editing

Effects Processing

  • sox

sox has a plethora of effects, including but not limited to reverb, chorus, echo, and pitch shifting. Note that sox currently only supports output to the OSS or ALSA drivers.
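
A minimal sketch, with hypothetical filenames:

  sox dry.wav wet.wav reverb    # apply reverb with default settings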

Format Conversion

  • sox

sox converts between formats with a single command, and batch conversions are easily scripted.
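
A sketch, assuming your sox build includes the relevant format support (filenames hypothetical):

  sox song.wav song.ogg                              # convert one file
  for f in *.wav; do sox "$f" "${f%.wav}.ogg"; done  # convert a directory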

Performance

Load yoshimi, linux-sampler, rakkarack, giada, super-collider

In which human-parsable audio/data goes into the black box, becomes computer-parsable data, and re-emerges as human-parsable audio, as near to real time as humanly (or rather, robotically) possible.

Accompaniment

data to audio

Synths
Drum Machines
Sampler

Real-time effects

S ⇒ [X] ⇒ CPU ⇒ [X] ⇒ S

audio to audio

Live Looping

audio to audio

Live-Coding

data (code) to audio

Alternates

  • OSS

OSS comes with a built-in mixer with a console interface. Currently no other console mixers fully support the OSSv4 API (they may, however, work with older drivers/soundcards that support the v3 API). ossmix will bring up a list of channels and their current settings. Documentation is here: http://manuals.opensound.com/usersguide/ossmix.html
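
A sketch (control names vary by card; run ossmix alone to see yours):

  ossmix               # list all controls and their current settings
  ossmix vol 75:75     # set a stereo control to left:right levels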

  • a2jmidid

ALSA MIDI to Jack MIDI
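
A sketch: run it as a bridging daemon, where -e also exports hardware MIDI ports:

  a2jmidid -e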

  • aumix
  • ecasound-OSS
  • sox

  sox -d -d <effect> <parameter>

will pipe sound from the default input device to the default output device whilst applying the effect. For further control, you can substitute the names of the devices.
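
For instance, a sketch applying reverb at 50% reverberance:

  sox -d -d reverb 50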

  • jack-smf-tools
  • jdkdrum

Inputs ASCII, outputs .wav. Tested with OSS only. This version was compiled with OSS as the sound driver. Documentation: http://www.jdkoftinoff.com/main/Free_Projects/Drum_Synth_For_Linux/

  • Epichord (needs fltk2, jack)

sequencer/tracker

  • timidity++-tui

MIDI player/synth

  • arpage (needs jack)

arpeggiator

  • bristol (needs jack)
  • mloop (needs jack)
  • loopercenter (needs fltk, jack)
  • superlooper (needs jack)
  • chuck-OSS

Chuck is a strongly-timed, on-the-fly audio programming language. To load and play a saved chuck program:

  chuck foo.ck

where foo is the name of your program.

Livecoding: Start the virtual machine.

  chuck --loop

In a text editor of your choice, begin coding your chuck program, then save it.

  chuck + foo.ck 

in a separate terminal window will add your program to the running virtual machine.
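
The other on-the-fly commands follow the same pattern; a sketch (shred IDs are assigned by the virtual machine):

  chuck ^           # report the VM status and running shreds
  chuck = 1 foo.ck  # replace shred 1 with a new version
  chuck - 1         # remove shred 1
  chuck --kill      # stop the virtual machine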
