This document is a user experience narrative for performing musical tasks with Core. It aims to define a workflow built from a specific set of high-quality applications in accord with Core's design/development philosophy: simple and small, but powerful. This workflow should make simple tasks trivial to perform, without letting complex tasks be obscured by lack of power or unnecessary complexity. Where form and function conflict, function will be chosen, but as they are often complementary it is hoped a certain utilitarian beauty will emerge. After all, we are making music. Due to limited scope, this document will only outline essential tasks. Users wishing to perform more specific tasks are encouraged to read the abundant documentation and to seek support through the proper channels.
Setup/Configure the System
Load set_rlimits, OSS, libffado (optional)
Interfaces, Devices and Soundcards
One of the keys to a good audio experience with Linux is making sure your hardware is supported. This is more a matter of good judgment than of buying expensive interfaces.
Kernel
- set_rlimits
Choosing a Driver
- OSS
- Alsa
- libffado
Routing Audio and Adjusting Levels
Load Jack-OSS, non-session, non-mixer
Routing
You can also route audio without Jack; see below. It's important to remember not to load more than one Jack extension. Different versions are incompatible and will explode.
Levels
Defining Tasks
A rudimentary mental map of some audio tasks. The boundaries between all these kinds of data/audio are arbitrary. All human music takes this journey, its endpoints unchanged from the beginning of our making songs to the end.
1) HB ⇒ [X] ⇒ S
2) S ⇒ [X] ⇒ HB
where HB is the human brain, S is the sounds we hear and X is any number of steps as defined below.
which in English is:
1) something in a brain expresses itself as sound.
2) an audience listens, and something is transmitted to their brains.
Sometimes there's a machine in the middle which helps us do physical or mental work.
HP = human performance
HN = human notation
HB = human brain
S = sound
I = instrument (physical work aid [work ⇒ wave])
These days, thanks to the transistor, CPUs are for all intents and purposes digital. Hence:
ADC = analog to digital converter
CPU = instrument (computational aid)
MEM = computer memory (usu. disk or RAM)
DAC = digital to analog converter
MIC = microphone (sound to analog signal)
SPK = speaker (analog signal to sound)
Classical Composition
HB ⇒ HN
Performing sheet music.
HN ⇒ HB ⇒ HP ⇒ I ⇒ S
Improvisation.
HB ⇒ HP ⇒ I ⇒ S
Studio
Load non-daw, yoshimi, cuse
HP ⇒ [X] ⇒ MEM
where X is any number of steps.
Recording
HP ⇒ I ⇒ S ⇒ MIC ⇒ ADC ⇒ CPU ⇒ MEM
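The tail of this chain (MIC ⇒ ADC ⇒ CPU ⇒ MEM) can be sketched with sox, which is already part of this workflow. This is a minimal sketch: it assumes sox is installed, a default capture device is configured, and "take1.wav" is a hypothetical output filename.

```shell
# Record from the default input device into a file (MIC => ADC => CPU => MEM).
# "take1.wav" is a hypothetical filename; stop recording with Ctrl-C.
rec take1.wav

# rec is sox's recording alias; the equivalent explicit form is:
sox -d take1.wav
```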
Production
load non-daw, sox
MEM ⇒ CPU ⇒ MEM
Multi-track Editing
Effects Processing
- sox
sox has a plethora of effects. Note that sox currently only supports output to the OSS or ALSA drivers.
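A few of those effects, applied file-to-file (so no sound driver is involved). The filenames "in.wav" and "out.wav" are assumptions for illustration; the effect names are standard sox effects.

```shell
# Apply one sox effect per invocation, reading in.wav and writing out.wav.
sox in.wav out.wav reverb       # add reverberation (default settings)
sox in.wav out.wav vol 0.5      # halve the volume
sox in.wav out.wav speed 1.5    # play 1.5x faster (pitch rises too)
sox in.wav out.wav trim 0 30    # keep only the first 30 seconds
```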
Format Conversion
- sox
sox can perform batch conversions efficiently.
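A batch conversion can be sketched with a plain shell loop around sox. This assumes sox was built with support for the target format (FLAC here) and that the .wav files live in the current directory.

```shell
# Convert every .wav in the current directory to FLAC, keeping the basename.
# ${f%.wav} strips the .wav suffix, so song.wav becomes song.flac.
for f in *.wav; do
    sox "$f" "${f%.wav}.flac"
done
```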
Computer-Aided Composition
load flabc, abcm2ps, abcmidi, playabc, non-sequencer, btektracker
Scoring
HN ⇒ [X] ⇒ CPU ⇒ [X] ⇒ HN
HN = musical notation
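With the abc tools loaded above, the HN ⇒ CPU ⇒ HN round trip might look like this. "tune.abc" is a hypothetical input file written in abc notation.

```shell
# Engrave abc notation to PostScript sheet music (HN => CPU => HN):
abcm2ps tune.abc -O tune.ps

# Render the same tune to MIDI with abc2midi (from the abcmidi package):
abc2midi tune.abc -o tune.mid
```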
Sequencing
HN ⇒ [X] ⇒ MEM
where HN = sequencing pattern.
Loop Sequencer
Step Sequencer
MIDI
HP ⇒ [X] ⇒ MEM ⇒ [X] ⇒ S
Software Instruments
yoshimi, linux sampler
Synths
Drum Machine
Sampler
External Digital Instruments
cuse, a2jmidid(optional)
HP ⇒ I ⇒ CPU ⇒ MEM ⇒ [X] ⇒ S
Performance
load , , ,
In which human-parsable audio/data goes into the black box, becomes computer-parsable data, and re-emerges as human-parsable audio. As near to real time as humanly possible, oh I mean robotically possible.
Real-time effects
rakarrack
S ⇒ [X] ⇒ CPU ⇒ [X] ⇒ S
audio to audio
Acoustic
Electric
Digital Instruments
Software Instruments
yoshimi, linux-sampler
External Digital Instruments
Live Looping
giada audio to audio
Live Coding
super-collider data(code) to audio
Alternates
- OSS
OSS comes with a built-in mixer with a console interface. Currently no other console mixers fully support the OSSv4 API (they may, however, work with older drivers/soundcards that support the v3 API). Running ossmix will bring up a list of channels and their current settings. Documentation is here: http://manuals.opensound.com/usersguide/ossmix.html
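For example, a minimal ossmix session might look like this. The control name "vol" is an assumption; the available controls vary per soundcard, so check the list that the bare command prints.

```shell
# List all OSSv4 mixer controls and their current settings:
ossmix

# Set a control to a new value; "vol" is a typical master-volume
# control name, but yours may differ (see the list above).
ossmix vol 75
```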
- aumix
- ecasound-OSS
- sox
sox -d -d <effect> <parameter>
will pipe sound from the default input device to the default output device whilst applying the effect. For further control, you can substitute the names of the devices.
- jack-smf-tools
- timidity++-tui
midi player/synth
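A sketch of playing a MIDI file with the TUI build, assuming "song.mid" is a hypothetical file on disk.

```shell
# Play a MIDI file with timidity's ncurses text interface:
timidity -in song.mid

# Without -in, timidity plays with its plain command-line interface:
timidity song.mid
```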
- bristol (needs X, jack)
- jdkdrum
Inputs ASCII, outputs .wav. Tested with OSS only. http://www.jdkoftinoff.com/main/Free_Projects/Drum_Synth_For_Linux/ This version was compiled with OSS as the sound driver.
- loopercenter (needs fltk, jack)
- mloop (needs ncurses, jack)
- superlooper (needs jack)
- chuck-OSS
ChucK is a strongly-timed, on-the-fly audio programming language. To load and play a saved ChucK program:
chuck foo.ck
where foo is the name of your program.
Live coding: start the virtual machine.
chuck --loop
In a text editor of your choice begin coding your chuck program. Save your program.
chuck + foo.ck
in a separate terminal window will add your program to the running virtual machine.
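The whole live-coding workflow above can be sketched as a single session; "foo.ck" is the hypothetical program from the example, and the shred ID in the remove step depends on what the status query reports.

```shell
# A typical ChucK live-coding session.
chuck --loop &     # start the virtual machine in the background
chuck + foo.ck     # add the program to the VM as a running shred
chuck ^            # query VM status (lists running shred IDs)
chuck - 1          # remove shred 1 (use an ID from the status output)
chuck --kill       # shut the virtual machine down
```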