Gakken Newsletter #153


Gakken E-mail Newsletter #153 arrived today. It contains some of the information that’s starting to show up on the Otona no Kagaku facebook page, and announces the update of the “next up” page on the Otona no Kagaku site. I expect that there will be more updates at both locations in the next few days.

Part 1 describes the development department’s attempts to create a 2-legged (bipedal) robot that would be interesting for adults to build while keeping the price down. After the release of the Jansen mini-Beest kit (#30), the team decided to approach the task with another Jansen linkage. The final result happened to come out resembling a walking penguin, so Jansen himself named it Animaris Imperio (“imperio” recalling the Latin for “emperor,” as in the emperor penguin). The mook will explore the history of bipedal robots, and include an interview with the mechanical designer for the Gundam series.
2,950 yen, 84-page mook. Dec. 14 release date.

Part 2
Gakken just released kit 3 in the Puchi Handmade (Small Handmade) series on Nov. 30th. Called the “Finland Barley Straw Mobile, Build Himmeli”, the kit is aimed at girls who want to create geometric hanging works of art. Examples can be found at the Noinoko webpage. Himmeli are traditional Finnish Christmas decorations made from straw. More examples can be found by visiting Tokyo Midtown right now. (At the moment, there’s no entry for the Himmeli mook kit on the Otona no Kagaku main page.)
1,680 yen, 34-page mook. Supplies include fifty 17cm straws, 15m of thread, and a tool.

Part 3
Book two in the Science Live mook series will come out on Dec. 10. 144 pages, 2,100 yen. Entitled Higgs Particle – The Start of the Universe and Matter.

Part 4
The Maker Faire Tokyo will be held at the Mirai-kan museum in Odaiba, Tokyo, Dec. 1st and 2nd. 1,000 yen for adults in advance, 1,500 yen the day of the event.

Java Synthesizer, Part 19, Scripting


I am finally where I wanted to be when I started this synth project a month ago – with a working user-configurable synth program. As I mentioned before, the biggest advantage to a software-based synth is that it’s almost infinitely configurable. The drawback is that there’s going to be a delay from when you press a key to when the sound comes out of the speakers. Depending on what Windows is screwing up in the background at any given moment, the delay ranges from a tenth of a second up to a full second. No idea how this app would perform on a Mac, but a Linux box has got to be an improvement. There’s also occasional clicking when a web browser is open. So, the best results will be with all other windows closed. Regardless, it’s a low-cost way to play with electronic sound.

A very simple circuit that still produces interesting results is the following: the ADSR generates a gate signal during each of its phases, and I’m using attackE (the attack event) to provide a gate for the noise generator so there’s a brief burst of noise at the beginning of the envelope. osc1 creates the main tone, with the frequency set by the keyboard keys. lfo1 (the gate oscillator) creates a slow squarewave that is mixed with the keyboard gate signal to arpeggiate the ADSR. The output of osc1 is mixed with the noise1 output and sent to the ADSR input. The ADSR output is split, with one line going to mixer 3 and the other to the input of echo1. The echo1 output is mixed in with the ADSR output and the result is sent to vca1 (which acts as the volume control for the circuit). The amplified signal goes to the main box1 object’s fixed connection pin, which sits invisibly between vca1 and the filter, vcf1. The filter applies the FFT to the signal, which is sent to pan1 and then finally to the speakers.

This is the script file, which will eventually become a saved patch file:

///////////////////////////////////
//
// Simple ADSR with noise, pan, echo and filter.
//
// Uses: osc1      – Main tone.
//       lfo1      – Gating for ADSR.
//       mix1      – Boolean AND, taking keyboard and lfo1 signals for adsr1 gate in.
//       mix2      – Audio mixer, for osc1 out and noise1 out.
//       mix3      – Audio mixer, for echo1 out and adsr1 out.
//       split1    – Audio splitter, for signals to echo1 in and vca1 in.
//       noise1    – Noise generator, running out during ADSR attack phase.
//       adsr1     – Main envelope generator.
//       echo1     – Echo line, attached between adsr1 and vca1.
//       vca1      – Main signal volume control for ADSR output.
//       vcf1      – Filter.
//       pan1      – Panning effect just prior to the speakers.
//
/////////////////////////////////////

new osc osc1 (100, 1, 0.5, true)
pin (50, 1000, 512, 256, Freq.,    freq)
pin (0,     4,   4,   0, Waveform, waveform)
pin (0.0, 1.0, 100,  49, Ratio,    ratio)
pin (0,    50,  50,   0, Glide Smoothness, smooth)
pin (0,    50,  50,   0, Glide Width,      width)
pin (0,     1,   1,   0, Enable Glide,     enableGlide)
pin (0,     1,   1,   1, Gate,             gate)
new osc lfo1 (2, 1, 0.5, false)
pin (0.1,  20, 100,   2, Freq.,    freq)
pin (0,     4,   4,   1, Waveform, waveform)
pin (0.0, 1.0, 100,  49, Ratio,    ratio)
pin (0,    50,  50,   0, Glide Smoothness, smooth)
pin (0,    50,  50,   0, Glide Width,      width)
pin (0,     1,   1,   0, Enable Glide,     enableGlide)
pin (0,     1,   1,   1, Gate,             gate)
new ADSR adsr1 (4000, 4000, 0.4, 4000)
pin (0, 8000, 512, 255, Attack,   attack)
pin (0,  600, 100,  10, Punch,    punch)
pin (0, 8000, 512, 255, Decay,    decay)
pin (0,  1.0, 400, 200, Sustain,  sustain)
pin (0, 8000, 512, 255, Attack2,  attack2)
pin (0,  1.0, 400, 200, Sustain2, sustain2)
pin (0, 8000, 512, 255, Release,  release)
pin (0,    1,   1,   0, Invert,   invert)
pin (0,    1,   1,   1, Gate,     gate)
new vca vca1 (16000, 0.0)
pin (0, 16000, 512, 256, Amp., amp)
new mixer mix1 (2, 0.0, 0.0, 2)
new mixer mix2 (2, 1.0, 1.0, 0)
pin (0.0, 1.0, 100, 100, Volume,     amp)
pin (0.0,   3, 100,  60, Avg. Comp., comp)
new mixer mix3 (2, 1.0, 1.0, 0)
pin (0.0, 1.0, 100, 100, Volume,     amp)
pin (0.0,   3, 100,  60, Avg. Comp., comp)
new splitter split1 (2, 0)
new noise noise1 (0, 10, 0.4)
pin (0,   3,   3,  0, Mode,          mode)
pin (0, 100, 100, 10, Density,       density)
pin (1, 100,  99, 49, Brownian Max., brownian)
pin (0, 1.0, 100, 20, Volume,        amp)
pin (0,   1,   1,  1, Gate,          gate)
new echo echo1 (32000, 10000, 0.75, 1)
pin (100, 32000, 512, 255, Loop Length, loop)
pin (10,  16000, 512, 255, Delay Max.,  delay)
pin (0,     0.9, 100,  49, Decay,       decay)
pin (0,       1,   1,   1, echoOn,      echoOn)
new vcf vcf1 (512, 0, 500, 0.5)
pin (0,  512, 512, 512, Cutoff,    filterLevel)
pin (0,    2,   2,   0, Mode,      mode)
pin (0,    7, 512, 256, Threshold, threshold)
pin (0,  1.0, 100,  50, Mult.,     mult)
pin (0,    1,   1,   1, Filter On, filterOn)
new pan pan1 (true, 0.5, 1.0, 0.5)
pin (0.1, 10, 100, 10, Pan Rate,     rate)
pin (0,    4,   4,  0, Pan Waveform, waveform)
pin (0,  1.0, 100,  49, Magnitude,   mag)
pin (0,  1.0, 100,  49, Offset,      offset)
pin (0,    1,   1,   1, panOn,       panOn)

connect control.0 vca1.amp
connect control.1 osc1.freq
connect control.2 osc1.waveform
connect control.3 lfo1.freq
connect control.4 lfo1.ratio
connect control.5 echo1.delay
connect control.6 echo1.decay
connect control.7 adsr1.attack
connect control.8 adsr1.decay
connect control.9 adsr1.sustain
connect control.10 adsr1.release
connect control.11 vcf1.filterLevel
connect control.12 vcf1.mode
connect control.13 vcf1.threshold
connect control.14 vcf1.mult
connect control.15 pan1.rate
connect control.16 pan1.waveform
connect control.17 pan1.mag
connect control.26 osc1.toggleGlide
connect control.27 echo1.toggle
connect control.28 pan1.toggle

connect kbd.gate mix1.1
connect lfo1.gateOut mix1.2
connect mix1.out adsr1.gate
//connect kbd.note osc1.freq
connect osc1.out mix2.1
connect noise1.out mix2.2
connect mix2.out adsr1.in
connect adsr1.out split1.in
connect adsr1.attackE noise1.gate
connect split1.1 mix3.1
connect split1.2 echo1.in
connect echo1.out mix3.2
connect mix3.out vca1.in
connect vca1.out box.fixed

/////////////////////////

Internally, osc1 has its glide feature, which provides a sliding up/down effect between keys. Combined with the filter, echo and pan, plus the little splash of noise and the ADSR envelope shaping, there’s a lot of fun to be had with this one arrangement. The only real complaint anyone may have is the amount of work involved in programming the slider controls. However, it’d be a simple matter to build up a library of module/pin ranges and then just copy-paste them into a new circuit. I’m debating adding menu options for implementing canned modules with the pin ranges predetermined, since the patch file could be hand-tweaked later.  Another option is to just build up a collection of circuits with different configurations and load the one I want to play with at the time.

Raw formatted script file here.

 

Java Synthesizer, Part 18, Script Interface


Hooboy, where to begin…

My plan for a user-definable synth circuit design was premised on the idea of using C-style pointers to functions. If you look at the last entry, in the section on addBuffer(), I’d turned the module methods into simple calls passing the output of one method to the input of the next. In C or C++, I’d make an array of pointers to those methods, and it would be a relatively simple matter of changing pin wiring dynamically via the array assignments. So I went to one of the Java forums and asked what the Java equivalent approach is. I got a total of two responses, the first telling me that Java doesn’t have anything like pointers to functions and that he couldn’t help me without more details regarding exactly what I was trying to do. So I typed up some example pseudocode that was deliberately inelegant, to make my request clear. The second responder simply complained about the formatting of the pseudocode. So, as I ranted in an earlier entry, the “professionals” are never of any help when I go to the forums.
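For the record, the closest standard Java idiom to an array of function pointers is a one-method interface plus an array of objects implementing it. A minimal, generic illustration (nothing here is from the app):

public class PointerDemo {
    // One-method interface standing in for a C function-pointer type.
    interface SampleSource { double next(); }

    public static void main(String[] args) {
        // "Array of function pointers": each slot is any object with a next() method.
        SampleSource[] chain = {
            new SampleSource() { public double next() { return 0.5; } },
            new SampleSource() { public double next() { return Math.random() - 0.5; } }
        };
        // Rewiring is just reassigning array slots.
        System.out.println(chain[0].next() + ", " + chain[1].next());
    }
}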

With no other choice, I approached the user-entered script-based synth circuit design as something to do with arrays (actually, ArrayLists) and I returned to my hardware roots. Like all of the modules, the interconnects have an electronics hardware origin, this time the concept of a patch board, telephone operator’s switchboard, or circuit backplane.

The key now is the new allTypes class. Java doesn’t have an easy way to test whether data is of a specific primitive type (boolean, int, double) or just a character string. Additionally, method overloads only look at the method signature, that is, the input parameters of each method. Methods that return boolean or double results but don’t take parameters just generate “duplicate method” compile errors. For these reasons, I put members for each of the data types I need (boolean, int, double and string) together with overloaded methods for reading and writing those members. To get around the signature limitation for reading data, I added a dummy input parameter to each implementation of read(). I don’t use the dummy; it’s just there to trick Java into using the correct overload on a per-module-pin basis (where one pin needs a boolean and another needs an int, and both call allTypes.read() identically).
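As a sketch, allTypes probably looks something like this (the single-letter member names are guesses taken from the .b and .d references in the processSound() listing below):

// Sketch only: one member per data type, overloaded write()s, and read()s
// with a dummy parameter whose only job is to select the right overload.
class allTypes {
    boolean b;
    int     i;
    double  d;
    String  s;

    void write(boolean val) { b = val; }
    void write(int val)     { i = val; }
    void write(double val)  { d = val; }
    void write(String val)  { s = val; }

    boolean read(boolean dummy) { return b; }
    int     read(int dummy)     { return i; }
    double  read(double dummy)  { return d; }
    String  read(String dummy)  { return s; }
}

A pin that needs a boolean calls read(false), while a double pin calls read(0.0); the argument itself is ignored.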

Once I had allTypes, I could create an ArrayList called connections. The first 30 items in connections are reserved for the jSliders and A-300 dial controls. Any new connections between modules (i.e. – osc1.out to adsr1.in) are added to the end of the ArrayList. In addition, I created a box class, which groups together the fixed elements of the synth, including keyboard gate out, keyboard note out, and speakers in. Because the VCF filter module needs all 320 samples pre-generated before performing the FFT, and the pan module operates on the VCF output, I included references in box to any vcf or pan object that the user creates. If either object is left undefined by the user, the addBuffer() method just skips it and sends the data on to the next step, or directly to the sound engine.

In order for the modules to read and write to the backplane, I had to add a lot of extra code to each. The inpins[] and outpins[] arrays contain the names of both sets of pins as they will be specified by the user for making connections. The connectorInPins[] and connectorOutPins[] arrays contain the index numbers into the connections ArrayList based on the user wiring. That is, if the user specifies that control 0 connects to osc1 frequency in, and osc1 out connects to adsr1 in, then osc1 connectorInPins[0] = 0, osc1 connectorOutPins[0] = 30 (the first new connection added to the ArrayList) and adsr1 connectorInPins[0] = 30.

As may be expected, I had to completely rewrite the displayCircuitValues() and processScreenInputs() methods to process data either from connections or from the module range members. Now, data from the jSliders, jTextFields and A-300 dials will either go to connections directly if the user is on the master screen (display screen 0), or to the module variables for the module-specific data entry screens (using the previous setStr() methods). updateFromKeyboard() had to be rewritten to address the new code for processScreenInputs() and to deal with commands from the drum pads for toggling the osc.glide, adsr.gate, echo.onOff and vcf.onOff flags. Then, to actually communicate with the backplane, each module needed readConnector() and writeConnector() methods, using the connectorInPins[] and connectorOutPins[] arrays to link each module variable (freq, rate) to the connector index number for each wired pin.
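Roughly, the per-module plumbing would look like this (a sketch using the allTypes class above; the -1 “unwired” convention and the field layout are my assumptions, not the app’s actual code):

import java.util.ArrayList;

class oscSketch {
    double freq = 440.0, out = 0.0;
    boolean gate = false;
    int[] connectorInPins  = { -1, -1 };   // backplane index per in-pin, -1 = unwired
    int[] connectorOutPins = { -1 };       // backplane index per out-pin
    ArrayList<allTypes> connections;       // the shared backplane

    void readConnector() {
        if (connectorInPins[0] >= 0) freq = connections.get(connectorInPins[0]).read(0.0);
        if (connectorInPins[1] >= 0) gate = connections.get(connectorInPins[1]).read(false);
    }
    void writeConnector() {
        if (connectorOutPins[0] >= 0) connections.get(connectorOutPins[0]).write(out);
    }
}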

As I moved on to addBuffer(), I realized that I needed one more module. Just as the mixer() object is designed to take a variable number of inputs and turn them into one simple pin for wiring purposes (connecting osc1.out, noise1, out and echo1.out to adsr1.in) I had to have a splitter that would output the same signal to more than one input pin. Specifically, splitting the gate signal (keyboard.gate + osc2.out -> mixer1.in) for adsr1.gateIn and noise1.gate; and the adsr1.out audio signal for vca1.in and echo1.in. The splitter class is really just an ArrayList of a variable number of allTypes objects that store whatever appears at the input pin. This way, when the splitter output is read, the same data goes to the backplane regardless of exactly which pin it is.
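The splitter itself can be tiny; a sketch (constructor argument guessed from the script syntax “new splitter split1 (2, 0)”):

import java.util.ArrayList;

// Sketch: one stored value, served identically to however many out pins exist.
class splitterSketch {
    private final ArrayList<allTypes> outs = new ArrayList<allTypes>();

    splitterSketch(int numOuts) {
        for (int i = 0; i < numOuts; i++) outs.add(new allTypes());
    }
    void in(double val) {                  // copy the input to every out pin
        for (allTypes a : outs) a.write(val);
    }
    double out(int pin) {                  // every pin returns the same data
        return outs.get(pin).read(0.0);
    }
}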

(Example of how the backplane works with the modules and controls.)

addBuffer() got renamed to processSound(), and was greatly simplified. Now, it’s just a plain for-loop that reads the backplane (the connections ArrayList) for each module (osc1, osc2, adsr1, vca1, etc.) and then writes new waveform data back out to it on a per-sample basis. Once the entire 320 samples are loaded into the buffer, the buffer is given to the vcf (if specified by the user) and then to pan (if specified by the user). Verifying the wiring of the synth circuit is then left to the user.

private void processSound() {
    ByteBuffer  byteBuffer;
    ShortBuffer shortBuffer;
    byteBuffer  = ByteBuffer.wrap(audioBuffer[audioBufferPtr]);
    shortBuffer = byteBuffer.asShortBuffer();

    double [] waveform = new double[AUDIO_SLICE_SHORT];

    connections.get(box1.keyBdGate).b = keyBoard.gateOut();
    connections.get(box1.keyBdNote).d = keyBoard.noteOut();
    for(int dataPointCnt = 0; dataPointCnt < AUDIO_SLICE_SHORT; dataPointCnt++){
        for(objectNamePair onp : modulesList) {
            if(onp.obj != null) {
                ((module) onp.obj).readConnector();
                ((module) onp.obj).writeConnector();
            }
            waveform[dataPointCnt] = connections.get(box1.fixed).d;
        }
    }

    if(box1.haveVCF()) waveform = box1.vcfModule.applyFilter(waveform);

    if(box1.havePAN()) {
        box1.panModule.buildPan();
        for(int i=0; i<AUDIO_SLICE_SHORT; i++) {
            shortBuffer.put( (short) (waveform[i] * box1.panModule.panBufferR[i]) );
            shortBuffer.put( (short) (waveform[i] * box1.panModule.panBufferL[i]) );
        }
    } else {
        for(int i=0; i<AUDIO_SLICE_SHORT; i++) {
            for(int j=0; j<AUDIO_CHANNELS; j++) {
                shortBuffer.put( (short) waveform[i] );
            }
        }
    }

    System.arraycopy(audioBuffer[audioBufferPtr], 0, audioData, 0, AUDIO_SLICE_BYTES * AUDIO_CHANNELS);
    audioBufferPtr = (audioBufferPtr == 0) ? 1 : 0;                             // Switch double buffer
    gotData = true;                                                             // Tell sound engine to play sample
}

Which brings us to exactly how the user enters the circuit.

I added a new textfield (jTextfield13) for command line input, with status messages and connection results going to jTextArea1.

parseCommands() – main string parser
parseCommandsSub() – used for recursion (Java doesn’t like recursion)
findModule() – new method for getting modulesList index number
addNewModule() – adds module to moduleList
addNewRange() – adds pin range data for each module

The new user commands are:

new – specifies a new module (i.e. – new osc osc1)
pin – gives the pin name and range (pin (50, 1000, 512, 256, Freq., freq))
connect – connects two pins (connect control.0 vca1.amp)
break – disconnects the specified pin (break vca1.amp)
grab – takes everything in jTextArea1 as batchfile input
killcircuit – deletes the existing circuit to start over (killcircuit)
list – lists command history, connections, controls and wiring
help, ? – lists the commands, special pin names, and individual command syntax

I don’t have patch file open or save implemented yet, so entering commands all the time to test the program got tedious fast. I added the grab command so that I can put the circuit wiring into a textfile and just copy it from notepad to jTextArea1 each time. Adding new menu bar items (open patch file, new circuit, list command history) will be trivial now because I can just pretend that most of the menu items are user input to the string parser.
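The dispatch part of the parser is the easy bit; something in this spirit (a standalone sketch, not the app’s actual parseCommands(), which also has to track the current module so that pin lines attach to the last “new”):

public class ParseSketch {
    static void parseCommands(String cmdLine) {
        String line = cmdLine.trim();
        if (line.isEmpty() || line.startsWith("//")) return;   // comment or blank
        String keyword = line.split("\\s+")[0].toLowerCase();
        if      (keyword.equals("new"))     System.out.println("add module: " + line);
        else if (keyword.equals("pin"))     System.out.println("add range:  " + line);
        else if (keyword.equals("connect")) System.out.println("wire pins:  " + line);
        else if (keyword.equals("break"))   System.out.println("unwire pin: " + line);
        else                                System.out.println("unknown:    " + line);
    }
    public static void main(String[] args) {
        parseCommands("new osc osc1 (100, 1, 0.5, true)");
        parseCommands("connect control.0 vca1.amp");
    }
}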

A sample synth circuit:

new osc osc1 (100, 1, 0.5, true)
pin (50, 1000, 512, 256, Freq., freq)
pin (0,     4,   4,   0, Waveform, waveform)
pin (0.0, 1.0, 100,  49, Ratio, ratio)
pin (0,    50,  50,   0, Glide Smoothness, smooth)
pin (0,    50,  50,   0, Glide Width, width)
pin (0,     1,   1,   0, Enable Glide, enableGlide)
pin (0,     1,   1,   1, Gate, gate)

new vca vca1 (16000, 0.0)
pin (0, 16000, 512, 256, Amp., amp)

connect control.0 vca1.amp
connect control.1 osc1.freq
connect control.2 osc1.waveform

connect kbd.gate osc1.gate
// connect kbd.note osc1.freq  // Not used if A-300 keyboard not connected
connect osc1.out vca1.in
connect vca1.out box.fixed

The syntax for the pin range is the same as for the Java class:
(range min, range max, # steps, starting slider value, textfield string, pin real name)

“//” is used to comment out commands that I may want later. I can also use it for documenting the patch file to remind myself of what each module is for.

It took me 10 days to get this far. Easily the most time-consuming part of the entire app.

Raw formatted full Java listing here.

 

50 Famous People – Eiji Tsuburaya


(All rights belong to their owners. Images used here for review purposes only.)

Finally, a volume of the 50 Famous People that I really enjoyed reading. Eiji Tsuburaya was the special effects expert (later, SFX director) at Toho Studios, responsible for creating Godzilla and Ultraman for films and TV. According to the wiki film credits, he also handled SFX for Akira Kurosawa’s Hidden Fortress. Born in 1901 in Fukushima (the prefecture with the nuclear reactor that melted down following the big quake in 2011), Eiji lost his mother when he was 3, and his father abandoned the family soon after (the mook doesn’t give further details; the wiki entry claims he went to China to take over the family business). He was raised by his uncle and his grandmother. At age 9, he became enamored with airplanes (the Wright Brothers had completed their first flight only 7 years earlier) and started making his own models out of wood with only photos to guide him. He was good enough at it to attract local news reporters. At 14, he finished school and went to Tokyo on his own to learn to become a pilot. However, the school he enrolled in only had one plane and one instructor, and in less than a year the instructor died in a crash and the school closed. Instead, Eiji moved to Kanda (near Akihabara) to enter an electronics school there. To raise money for tuition, he started working part time at a toy company, and his design for a kick scooter turned out to be fairly popular. Then, during a company hanami (Spring cherry blossom viewing) drinking party, the toy company group had set up next to a film company’s party site, and the two groups almost got into a fight. Eiji intervened, and the second group offered him a job. He trained as a cameraman, and earned a living that way from age 18 to 32. In 1933, he saw King Kong, and vowed to become a movie special effects master. This led to his working on Hawai Mare oki Kaisen, a war movie featuring aerial battles Eiji staged using miniatures. This was followed in 1954 by Godzilla, and later by Ultra Q (the predecessor to the Ultraman franchise, which he has sole credit for creating). He died in 1970 at age 68 of heart failure. As well as having won numerous awards for his special effects, he’s named as an influence on Steven Spielberg and George Lucas, and he invented the “Toho Versatile System” optical printer for widescreen pictures.

The intro manga has Mohea being chased by two monsters and saved by Merrino, Mami and Utako. Turns out she’s just acting in Youichi’s and Daichi’s reenactment of an Ultra Seven movie scene, and is scarily realistic in pretending to be frightened. When told of the movie’s background, Mohea summons her spaceship to fly out to planet M-78 to thank Ultraman in person, as Youichi and Daichi shout out that this is just fiction. Merrino is so frustrated at being ignored (wanting to be a hero himself) that he orders Study Bell to start the lesson. In the end, Merrino, Youichi and Daichi prepare to face off in a rubber suit battle, but Mohea intervenes and tells them to play nice together – doing cat’s cradles with yarn (not an easy task if you have crab’s claws for hands).

The main manga is by Daisuke Higuchi, a female artist known for Go Ahead and Dokushi (which is still on-going in Comic Birz magazine). The artwork is very good, and Eiji almost looks like his photos (he’s not too westernized). The story is a bit preachy and shojo-ish, but not overwhelmingly so. It starts out in 1954, with Eiji, age 53, on the set with a miniature version of Tokyo, blocking out the shot for a scene for his new Godzilla movie. He’s describing how he wants the power towers to look, and gets into an argument with a young effects technician who claims that none of these things are possible. (I’m not sure if the tech is made up for the story, but a sidebar states that Eiji had nicknamed him “Denchi”, using the kanji for “electricity”.) “Den” storms off, and Eiji is called over to the camera because the closeup of the Godzilla hand puppet isn’t convincing enough. Eiji asks for a pane of glass, draws some lines on it, puts it in front of the camera and voila – instant perspective, with the puppet behind some power wires. Den meets up with another tech working on an outdoor set in a different sound stage, where the group is ridiculed by workers from Toho’s samurai drama division (at the time, Toho specialized in period dramas, and the special effects division was looked down on as “boys playing with toys”). Den is insulted, and his partner tells him he sounds just like Eiji. The two go to the screening room, where Den gets to see “Hawai Mare” for the first time, with its realistic-looking aerial battles. The friend explains that scenes with one plane were shot with an operator holding a model suspended by piano wire from a bamboo pole. In squadron shots, the planes were suspended by fixed wires and the background set moved behind them, pushed along tracks by two technicians. Excited, Den returns to work on his power tower. Eiji had a practice of buying coffee for anyone he’d had a fight with, and the two of them sit down to discuss how to pull off the sequence where the tower melts from Godzilla’s breath attack. When the final film is screened, the two guys who had scoffed earlier are scared out of their pants, which is the reaction Eiji had been striving for. The narrator comments that Eiji paved the way for the generations of SFX masters that came after him.

The textbook section details Eiji’s upbringing and career path given above. The last two pages are a listing of his most famous scenes (the power tower melting in Godzilla, the cocoon opening sequence on Tokyo Tower in Mothra, the volcano scene in Rodan, among others) plus brief descriptions of the monsters he made for his Ultraman franchise. Explanations of some of the effects tricks include shooting flying planes with the set upside down so the wires holding the planes from the bottom won’t be seen, shooting volcanoes upside down so the “billowing” smoke will hit the floor and spread out, and spreading gelatin along the pool floor to simulate the sea’s surface for ocean battles.

TCG cards include: I. H. N. Evans, Frances Hodgson Burnett, Alexander Graham Bell, Rama V, Van Gogh, Antoni Gaudi, Kang Youwei, Woodrow Wilson and Robert Edwin Peary.

Tsuburaya was a very talented man, with a non-linear job history. He deserves wider exposure to western audiences. This mook is highly recommended.

Java Synthesizer, Part 17, Cleanup 2


If you’re a professional programmer, odds are that most of your code is coming out of a CAD system, so you don’t really need to write many lines from scratch. Even if you’re not using CAD, you still probably have a significant design phase where you do all your planning on paper first, and then the coding is just a matter of transcribing from the plan to the development environment. You don’t need to worry much about what you’re doing during the actual typing phase because everything’s already been laid out for you in advance.  Contrast this with the hobbyist, who is experimenting as they go along, learning with each mistake and spending hours on debug because they forgot something obvious.  The thing to remember about hobbyists is that most of what they create is for themselves and is never intended for a commercial market.  They don’t need the rigid rules and design requirements that “professionals” lean on. “Good enough” really is good enough.

I say this because I’ve been ripped by “professional” engineers accusing me of not knowing all the specs for something I’m working on, when what I really want is for someone to answer a question regarding implementation that they apparently can’t answer themselves. Be that as it may, I get the feeling that I’m trying stuff in Java that’s not really well documented (outside of some overpriced commercial training course), and that means I occasionally find myself rewriting different sections as I go along, without outside help. I either end up adding functionality that I hadn’t thought I’d need, or rewriting something because I finally found a better way to get the job done. Such is the case now.

I’m slowly working my way towards the user-programmable synth interface I’d mentioned before, and I’m trying to get the various existing modules ready for the Java equivalent of “pointers to a function”. I’m also implementing each of the earlier ideas I’d had for modules and module operating modes (such as attack2, sustain2 and invert for the ADSR). I’m now at a resting point before tackling a user-input script parser (I don’t know if I want to address a graphics-based schematic patch approach or not. Seems like overkill for my needs.)

So, what’s changed recently?

First, I turned the A-300 Pro MIDI parser section into an action listener. It doesn’t seem to have made any impact on the clicking from the sound engine, but it’s the first step to putting addBuffer() into its own listener, and it removes the workload on the timer method.

Second, I focused more on the circuit object. This is the object that contains pointers to each of the modules for determining which data entry screens I’m going to get for the various modules so I can adjust settings using the jSliders on the screen (or the MIDI controls on the A-300). Each new item in circuit represents a separate module (osc1, osc2, etc.) and a separate data entry screen. There’s one “main screen”, screen 0, where I can change the settings for the most important module items all in one place (i.e. – circuit volume, osc1 frequency and waveform and the ADSR attack, decay, sustain and release). circuit will be the starting point for the user wiring script parser.

Example:

int k = 0;
circuits.add(new objectNamePair(osc1, "Main Panel"));
circuits.get(k).ranges.add(new range(0, 16000, 512, 256,   "Volume",        "d"));
circuits.get(k).ranges.add(new range(50, 1000, 512, 256,   "Osc1 Freq.",    "d"));
circuits.get(k).ranges.add(new range(0,     4,   4,   0,   "Osc1 Waveform", "i"));
circuits.get(k).ranges.add(new range(0.1,  20, 512, 256,   "Osc2 Freq.",    "d"));
circuits.get(k).ranges.add(new range(0.0, 1.0, 100,  49,   "Osc2 Ratio",    "d"));
circuits.get(k).ranges.add(new range(-2.0, 2.0, 400, 199,  "Bender Offset", "d"));
circuits.get(k).ranges.add(new range(0,   512, 512, 512,   "Filter",        "d"));
circuits.get(k).ranges.add(new range(0, 8000, 512, 255,    "Attack",        "d"));
circuits.get(k).ranges.add(new range(0, 8000, 512, 255,    "Decay",         "d"));
circuits.get(k).ranges.add(new range(0,  1.0, 400, 200,    "Sustain",       "d"));
circuits.get(k).ranges.add(new range(0, 8000, 512, 255,    "Release",       "d"));
k++;
circuits.add(new objectNamePair(osc1, "osc1"));
circuits.get(k).ranges.add(new range(50, 1000, 512, 256, "Freq.",            "d"));
circuits.get(k).ranges.add(new range(0,     4,   4,   0, "Waveform",         "i"));
circuits.get(k).ranges.add(new range(0.0, 1.0, 100,  49, "Ratio",            "d"));
circuits.get(k).ranges.add(new range(0,    50,  50,   0, "Glide Smoothness", "i"));
circuits.get(k).ranges.add(new range(0,    50,  50,   0, "Glide Width",      "i"));
circuits.get(k).ranges.add(new range(0,     1,   1,   0, "Enable Glide",     "b"));
circuits.get(k).ranges.add(new range(0,     1,   1,   0, "Gate",             "b"));
k++;

Third, I added a “module” class that all the other modules inherit from, for standardizing calls to .setStr(), .strVal(), .toggle() and .toggleGlide() for the osc, vca, adsr, etc. classes. Which means I also had to add those methods to each of the other classes.

abstract class module {
    abstract void   setStr(int idx, String str);
    abstract String strVal(int idx);
    abstract void   toggle();
    abstract void   toggleGlide();
}

=========

class osc extends module {

    @Override void toggle() {
        gate = (! gate);
    }
    @Override void toggleGlide() {
        enableGlide = (! enableGlide);
    }
    @Override String strVal(int idx) {
        String ret = "Invalid variable";
        switch (idx) {
            case 0: ret = Double.toString(freq);
                    break;
            case 1: ret = Integer.toString(waveform);
                    break;
            case 2: ret = Double.toString(ratio);
                    break;
            case 3: ret = Integer.toString(glideSmoothness);
                    break;
            case 4: ret = Integer.toString(glideWidth);
                    break;
            case 5: ret = Boolean.toString(enableGlide);
                    break;
            case 6: ret = Boolean.toString(gate);
                    break;
        }
        return(ret);
    }
    @Override void setStr(int idx, String str) {
        if(! str.trim().isEmpty()) {
            switch (idx) {
                case 0: setFreq(Double.parseDouble(str));
                        break;
                case 1: waveform        = (int) Double.parseDouble(str);
                        break;
                case 2: ratio           = Double.parseDouble(str);
                        break;
                case 3: glideSmoothness = (int) Double.parseDouble(str);
                        break;
                case 4: glideWidth      = (int) Double.parseDouble(str);
                        break;
                case 5: enableGlide     = str2bool(str);
                        break;
                case 6: gate            = str2bool(str);
                        break;
            }
        }
    }
}
Fourth, I decided to embed a VCA module in the noise module. The reason for this is that noise is one of those features where you really don’t want it at 100% volume. Ever. So, since I’d be dedicating a VCA to attenuating the noise output anyway, it might as well be within the noise class itself. Another benefit is that it simplifies the circuit wiring section later. (The pan class also has its own vca.)

Fifth, as mentioned above, I went ahead and added attack2 and sustain2 to the ADSR, which activate when the user lets go of the keyboard key, prior to going into the release phase. As I started adding the logic for what to do if attack2 is 0, I realized that my earlier approach of tracking each phase to avoid introducing dead cycles and causing clicking was just giving myself more work. It’s easier to just use a “while(! done)” loop with a switch-case to walk through each ADSR phase with 0 length and stop at the beginning of the first non-zero phase (e.g., if attack, punch and decay are all 0, start at sustain with a count of 0, and set done = true). Now, I can very easily add a decay2 phase if I want, and I don’t have to worry about remembering what my intended logic was. Anyway, I ripped out the old code and replaced it with the “while(! done)” loop. I followed this up with an invert toggle, which turns the sound envelope upside-down.
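To illustrate the phase-skipping idea (a standalone sketch; names and the reduced phase list are mine, and the real ADSR tracks much more state):

public class PhaseSkipSketch {
    static final int ATTACK = 0, PUNCH = 1, DECAY = 2, SUSTAIN = 3;

    // On gate-on, walk the phases in order and stop at the first live one.
    static int firstLivePhase(int attackLen, int punchLen, int decayLen) {
        int phase = ATTACK;
        boolean done = false;
        while (!done) {
            switch (phase) {
                case ATTACK: if (attackLen > 0) done = true; else phase = PUNCH;   break;
                case PUNCH:  if (punchLen  > 0) done = true; else phase = DECAY;   break;
                case DECAY:  if (decayLen  > 0) done = true; else phase = SUSTAIN; break;
                default:     done = true;   // sustain always accepts the gate
            }
        }
        return phase;
    }

    public static void main(String[] args) {
        System.out.println(firstLivePhase(0, 0, 0));   // 3 (SUSTAIN)
        System.out.println(firstLivePhase(0, 5, 0));   // 1 (PUNCH)
    }
}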

Sixth, I moved the VCF ifs and for-loops that had been in addBuffer() into the vcf class. Again, this is for simplifying the addBuffer() code when I get to user-programmable synth circuits. I also wanted to add some new filtering techniques, so there are 2 new modes. One decays the FFT frequency bins based on magnitude thresholds (if magnitude < 50,000, decay that bin by 80%), and the second combines the existing frequency filtering with the new mode 1. Because some of the magnitudes can get really big, I converted the magnitude threshold check to log base-10, so the threshold range is just 0 to 7.0.
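The magnitude-threshold mode boils down to something like this (a sketch: the bins are represented as separate real/imaginary arrays, the +1 inside the log is my guard against log10(0), and mult = 0.2 would give the 80% cut from the example):

class FilterSketch {
    // Attenuate any FFT bin whose magnitude is below the threshold.
    // threshold is log10(magnitude), so the 0..7 range covers 1 to 10^7.
    static void decayQuietBins(double[] re, double[] im, double threshold, double mult) {
        for (int bin = 0; bin < re.length; bin++) {
            double mag = Math.sqrt(re[bin] * re[bin] + im[bin] * im[bin]);
            if (Math.log10(mag + 1.0) < threshold) {
                re[bin] *= mult;
                im[bin] *= mult;
            }
        }
    }
}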

Seventh brings me to the first really new piece of code – the mixer module. Unlike the Java system mixer (which I unsuccessfully played with in an earlier blog entry), this mixer is designed specifically to ease the synth design. There are certain places in the synth where two or more signals go to the same point, examples being keyBoard.gateOut and osc2.gateOut driving adsr1.gateIn; and osc1, echo1 and noise1 all going to adsr1.dataIn. Rather than trying to figure out how to softcode this from the GUI, I created the mixer class to take multiple inputs, average them and run them through a VCA before returning them from mixer.out.

(Example circuit showing audio-type mixer at ADSR input.)

One of the drawbacks to simply averaging signals is amplitude loss. Say osc1 outputs a sinewave at +/- 1, the noise generator outputs +/- 0.4, and echo1 feeds back the signal at +/- 0.6. The averaged signal will only swing +/- 0.66. If the ADSR outputs to vca1 with a maximum volume of 16,000 (when the input is 1.0), then the total averaged signal going to the sound engine will be 10,666 instead of the originally desired 16,000, and it sounds weaker. I could boost vca1 for a range from 0 to 24,000, but I’d be altering vca1’s range with every new circuit patch. It makes more sense to put a dedicated vca inside the mixer class to compensate for averaging regardless of the number of pins involved.

The mixer class also handles booleans for gate signals (I just have an AND function right now); and multiplying signals together for scaling the ADSR output with the pan waveform.
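Put together, the mixer concept reduces to something like this (a sketch; the in()/out()/outAndBool() names match the addBuffer() listing below, but the constructor and the compensation math are guesses):

// Average N inputs, compensate for the amplitude loss that averaging causes,
// and AND the booleans for gate signals.
class mixerSketch {
    private final double[]  ins;
    private final boolean[] gates;
    double amp  = 1.0;   // built-in VCA multiplier
    double comp = 1.0;   // averaging compensation ("Avg. Comp." pin)

    mixerSketch(int numIns) {
        ins   = new double[numIns];
        gates = new boolean[numIns];
    }
    void in(int pin, double val)  { ins[pin]   = val; }
    void in(int pin, boolean val) { gates[pin] = val; }

    double out() {                       // averaged + compensated audio mix
        double sum = 0.0;
        for (double v : ins) sum += v;
        return (sum / ins.length) * comp * amp;
    }
    boolean outAndBool() {               // boolean AND across the gate inputs
        for (boolean g : gates) if (!g) return false;
        return true;
    }
}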

The last real change is to addBuffer(), which is starting to look like it contains the output from a code generator. This is an intermediary step in the process of turning everything into “pointers to functions”.

private void addBuffer() {
    ByteBuffer  byteBuffer;
    ShortBuffer shortBuffer;
    byteBuffer  = ByteBuffer.wrap(audioBuffer[audioBufferPtr]);
    shortBuffer = byteBuffer.asShortBuffer();

    // Temp variable to store calculation results
    double [] hold = new double[AUDIO_SLICE_SHORT];

    osc1.pitchBend = keyBoard.pitchWheelOut();
    // Harmonic oscillator 3 relative to oscillator 1's frequency
    osc3.setFreq(bender1.out(osc1.getFreq()));

    for(int pCnt = 0; pCnt < AUDIO_SLICE_SHORT; pCnt++){
        osc2.nextSlice();  // Increment oscillator 2

        mix1.in(0, keyBoard.gateOut());
        mix1.in(1, osc2.gateOut());
        adsr1.gateIn(mix1.outAndBool());  // Turn on ADSR via the keyboard or an oscillator

        noise1.gateIn(adsr1.attackEvent());

        mix2.in(0, osc1.nextSlice());
        mix2.in(1, noise1.addNoise());
        mix2.in(2, echo1.out());

        hold[pCnt] = adsr1.nextSlice(mix2.out());
        echo1.in(hold[pCnt]);
        pan1.buildPan(pCnt);
        vcf1.dataIn(pCnt, new Complex(hold[pCnt], 0));  // Add data to filter buffer
    }

    hold = vcf1.applyFilter(hold); // Do the actual FFT filtering
}

Raw formatted textfile of full app here.

 

Java Synthesizer, Part 16 – pan


As mentioned in the last article, I still had some cleaning up to do in the sound engine. Along with clicking that would crop up in the waveform at random, there’s a breakup of the waveform right when the app starts running, and the engine only supports 1 channel (mono sound). The conversion to stereo wasn’t that difficult; it’s just that I wasn’t paying enough attention and I introduced a bug that took a couple hours to track down. I also wanted to softcode the various audio parameters (mainly the sampling time (10 ms or 20 ms) and the slice sizes (320 or 640 bytes)).

The sound engine really didn’t need to change much. The one issue I still haven’t been able to resolve is the sound breaking up right after the app starts. I’m thinking it’s because of an uninitialized section of the echo buffer array, but it’s intermittent. I can’t guarantee that the problem will happen the exact same way each time, but it is exacerbated by having the echo object in the loop. The easiest workaround is to just let the app sit for 20 seconds before trying to play music. I know it’s not an acceptable solution in the long term, but it’ll have to do until I can figure out what’s wrong. On the other hand, I discovered that the double buffering wasn’t starting out staggered properly, and by fixing that I caused a lot of the other random clicking (caused by Windows doing stuff in the background) to go away.

Converting to stereo is really just two steps: changing the number of channels in the audio format object, and doubling the size of the audio double buffers. The sound engine receives twice the data in one second for 2 channels compared to 1, so the buffers holding the waveform data need to increase, but otherwise it has no impact on the other modules. However, the addBuffer method needs to put data into the shortBuffer so that the right and left channels alternate in sequence. And this brings us to panning.

Once stereo is functioning correctly, adding a panning module is easy. However, it has to work in two steps. The first step is building up the volume multipliers for the right and left channels (usually, the 2 channels are 180 degrees out of phase), which can be done in the addBuffer for-loop. The second step is to wait until the VCF is done with the Fourier transforms and then apply the pan data to the filtered waveform data. This second step means that the pan data needs to be stored in an array, as does the waveform data coming out of the ADSR. Then the pan array can be applied to the filtered waveform array and directly .put() into the shortBuffer for transfer to the sound engine.

I’ll sidetrack a bit here. My goal is to have a way to build up synth circuits on the fly instead of hardcoding them, which I’d then be able to save as patches to disk. This means that whatever goes into the addBuffer() method for-loop in building up the waveform data has to be abstracted enough to allow me to just use the equivalent of pointers to objects and object method calls. I’ve been able to do that mostly with my circuit object, and the implementation of the module framework. Mostly.

There are 3 modules that don’t play well with each other. They are:

keyboard, which gets its data from the A-300 via a timer call.
VCF, the Voltage-Controlled filter, which operates on all 320 samples at once.
pan, which operates on the output from the VCF, just before the sound engine.

Essentially, keyboard, VCF and pan are attached directly to the synth inputs or outputs and need to be hardcoded as system functions. This means that while the user can set the filter frequency or the pan rate, they can’t choose to not have them in the overall design.  On the other hand, these three circuits may be bypassed or disabled if they’re not wanted. This is just a programming decision, and I may change my mind about hardcoding VCF and pan in the future.

Back to pan. The pan class is really just two 640-byte-long buffers, one for each speaker, plus osc and VCA objects. The osc lets me choose between my various waveforms (if I deactivate pan, I just output 1.0’s for both channels), and the VCA offsets the osc output to run between 0.0 and 1.0. A balanced signal oscillates above and below 0.5. The osc output goes to the channel buffers. Then, in the second step, the buffers act as multipliers for the ADSR output data as it’s being .put() into the shortBuffer. The left channel multiplier is just 1 minus the right channel’s.
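A sketch of that two-step pan (buildPan/panBufferR/panBufferL are the names that appear in the Part 17 processSound() listing above; the internal osc is reduced here to a bare Math.sin):

class panSketch {
    static final int SLICE = 320;            // one multiplier per sample
    double[] panBufferR = new double[SLICE];
    double[] panBufferL = new double[SLICE];
    double rate = 1.0;                       // pan sweep rate in Hz
    private double phase = 0.0;
    private final double sampleRate = 16000.0;

    // Step 1: build the per-sample channel multipliers.
    void buildPan(boolean panOn) {
        for (int i = 0; i < SLICE; i++) {
            if (panOn) {
                panBufferR[i] = 0.5 + 0.5 * Math.sin(phase);  // swings around 0.5
                phase += 2 * Math.PI * rate / sampleRate;
                panBufferL[i] = 1.0 - panBufferR[i];          // left = complement
            } else {
                panBufferR[i] = 1.0;                          // pan off: full volume
                panBufferL[i] = 1.0;
            }
        }
    }
    // Step 2 happens in the engine: waveform[i] * panBufferR[i] / panBufferL[i].
}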

———–

The addBuffer() algorithm this time around is:

Read keyboard pitchwheel to osc1
For 0 to 319
Increment osc2 counter
ADSR gate is set to keyboard gate or osc2 output
noise1 gate triggered by ADSR attack event
noise1 is fed to vca3
The output of osc1 is the input to the ADSR
The output of the ADSR, vca3 and echo1 are averaged and input to vca1
The output of vca1 is stored to the hold buffer and written to the echo and filter buffers
Calculate the pan multiplier and store to the pan buffer
// End of for-loop

Apply the FFT filter and write to the hold buffer
Multiply the hold buffer data against the pan buffer and send to the sound engine

Raw formatted textfile here.

 

50 Famous People – Momofuku Ando


I guess it shouldn’t be surprising that Asahi Shimbun Publishing would shake up their 50 Famous People list a little. After all, they printed all fifty names in volume 1, and we’re now up to #41, 10 months later – there’s got to be some re-think along the way. Actually, the Martin Luther King, Jr. issue, which was supposed to be #41, came out as #21, which was supposed to be the Charlie Chaplin issue. Momofuku Ando moved from #46 to #41, and the new #46 is now slated to be Alfred Nobel. There’s no explanation for the changes, but I’m guessing that there were copyright issues for the photos Asahi wanted to use (or permission to profit off of Chaplin’s likeness.)

(All rights belong to their owners. Images used here for review purposes only.)

Momofuku Ando is the inventor of cup noodles, and founder of the Nisshin Foods conglomerate. The wiki entry is completely at odds with the biography details presented in the mook manga. The manga makes no mention of the deaths of his parents, or his belonging to a family-run textiles firm in Taiwan. According to the wiki, Ando moved to Osaka, gave loans to the students there, and was caught up in tax evasion charges, serving a 2-year sentence as a result. The article also states that he was making money selling salt at the time he began working on noodle production (supposedly in reaction to the Japanese government’s decision to focus on bread sales during the years immediately following WW II). Granted, the mook is aimed at children, and wiki has a reputation for containing more “opinion” than facts. But, the discrepancies between the two this time are huge.

The intro manga starts with Yoichi’s and Mami’s mother finishing up work on the family photo album. She gets ready to make dinner, only to discover that since Yoichi and Merrino were hungry, Mami and Mohea had destroyed ALL of the food in the house in an attempt to make some kind of a snack. Mom then resorts to “that” – her stock of cup noodle packages. Unfortunately, the kids like the cup ramen more than her regular cooking and she goes into a blue funk at the end.

The main manga is by Wataru Ofuji (Mini Pato! and Archeologic). The artwork’s not bad, but again, the faces have been westernized excessively, and Ando’s nose is maybe half the size shown in his photos. Anyway… The first page announces that cup noodle is one of the best-known food products world-wide, having sold hundreds of millions of packages since its first appearance in the early ’70s. So, who created it? The story flashes back to the end of WW II, when Ando was a returning soldier. He notices that people have already started opening up small shops to revive a cash-based society. The city also has small cart vendors, and one particular cart has a very long line. Wondering why so many people would wait out in the cold like this, he discovers that they’re eating hot ramen (according to a sidebar, ramen was actually a Japanese dish, and didn’t come from China as is commonly believed). At the time, he’s struck by how happy the destitute people are while eating warm food. A few years go by, and Ando is approached by someone representing a “neighborhood trade association”, who proposes to use Ando’s name and reputation while handling the actual money management for a group of merchants. The association folds and Ando is suddenly bankrupt. His wife tells him not to worry about having lost everything except their house and a small table. She serves dinner to him and their young son and daughter, and again Ando notices how happy they look while eating. So, he goes out to gather supplies to build a small research kitchen in a shed next to their garden and begins work on making his own easy-serve noodles. He places a list of 5 requirements on the wall as an incentive – delicious, easy to prepare, long shelf-life, can be enjoyed anywhere, inexpensive.

Over the next few months, he discovers how hard it is to hand-make soba noodles, and his family rejects the results when he taste-tests them. Eventually he improves, and reaches stage 1 – tastes good. But the noodles go bad too quickly, and he hits a wall until he sees his wife making tempura. The hot oil seals the surface of the tempura batter while creating little holes for water to enter through, and this leads him to experiment with flash frying. In 1958, a year after building the kitchen, Ando brings his dried “chicken ramen” noodles to market. Instructions include putting the noodles in a bowl, placing a raw egg on top, adding boiling water and waiting 3 minutes. The mook claims that the noodles are an immediate big hit, while the wiki article says that at 35 yen a package, they were resisted as unnecessary luxury goods. Ando creates Nisshin Foods to meet demand for the product. In 1966, he prepares to market the noodles world-wide, and is told that people want something that comes in its own bowl. This results in Ando’s development of “cup ramen” in 1971. The mook skips ahead to January 17, 1995 and the Great Hanshin earthquake in the Kyoto, Osaka and Kobe region. Being located in Osaka, Ando is quick to dispatch food trucks throughout the disaster areas, where he again gets to see people in need enjoying warm food in the middle of the cold and snow. The story closes with his motto: “Peace will come to the world when the people have enough to eat.”

The textbook section does get into more detail regarding Ando’s childhood. Born in Japan-controlled Taiwan roughly 100 years ago, his parents died early and he was raised by his grandparents. His grandfather had a textiles shop, and Momofuku liked spending time there. At age 14, he graduated from what was called an elementary school at the time, and started working in the shop himself, selling wool products. At age 22 he graduated from university and planned to focus on business. But the war broke out, and when it ended he moved to Osaka and got involved with the neighborhood association. (The thing about giving out student loans isn’t mentioned outside of the main manga, but apparently it was done to avoid paying taxes on the money.) After going bankrupt, he did make money producing salt, which helped finance his research kitchen. The government was promoting bread sales using U.S. wheat, and Ando felt that the Japanese would be happier eating their familiar noodles, which was one of the incentives for his research. After perfecting his flash-fried chicken noodles, sales were slow because the price was higher than what people were paying for similar products for home and restaurant use. Eventually, though, they did catch on. In the ’60s, while researching the American market, he saw someone put the chicken noodles in a cup and eat them while working. This inspired Ando to develop cup ramen. One of the sidebar articles describes the innovations behind cup ramen: the ingredients other than the noodles are anything that freeze-dries well; the cup was styrofoam (now paper) that protected people from the boiling water inside; the lid was an aluminum-on-paper product inspired by the packaging for macadamia nuts served on airplanes; and the noodles are cooked upside down, making it easier to slide the cup on from the top, after which the package is turned over. The last 2 pages are dedicated to the history of preserved Japanese foods, from smoked fish (2,000 years ago) to sushi (raw fish was originally packed in discarded cooked rice as a preservative, 200 years ago). The idea of putting a fried pork cutlet on top of curry rice came from a Tokyo Giants baseball player 60 years ago, who was hungry and short on time before a big game. He ran into a restaurant, yelled out for a cutlet and a plate of curry, and liked the result so much that he kept ordering it whenever he returned to the shop, so the restaurant put it on the regular menu.

TCG cards this time are for: Tchaikovsky, Cezanne, Liliuokalani, Renoir, Sir Henry Stanley, Auguste Rodin, Thomas Edison, Wilhelm Rontgen and Nietzsche.

Overall, there’s enough misdirection in this manga to call into question its accuracy as a representation of Momofuku Ando as a person. Still, it is interesting to learn about the creator of something that is so commonly accepted in households worldwide. Recommended.

Java Synthesizer, Part 15 – Glide Redux


I’ve got just about all of the synth modules done that I want. Ignoring the GUI side, my main interests are in cleaning up the sound engine, removing the clicks when changing frequency if at all possible, and adding pan. Pan is a control that, over time, weights the sound coming out of one speaker higher than the other(s) (more than one “other” in the case of quad systems, etc.; for the laptop, there are just the two speakers). The sound data line can be defined as mono or stereo, and right now it’s just mono. If I go to stereo, I need to make the byte buffers twice as long, and that means messing with the sound engine. So, I might as well address pan at the same time I clean up the engine itself.

I wrote about trying to clean up the clicks in the sound waveforms before, but I decided to take a second stab at it. Any discontinuity in the waveform, or any change that is too rapid, will result in a click, even if it only lasts a single sample (62.5 us at the 16,000 sample rate). If you are doing something like:

while(! done) {
    waveform = Math.sin(cnt * (2 * Math.PI * freq) / sampleRate);
    if(userChangedFreq) {
        freq = newFreq;
        userChangedFreq = false;
    }
    cnt++;
}

Then you have no way of knowing where in the old cycle you’re changing from or where in the new cycle you’re changing to, meaning you’re going to have a discontinuity, and that’s where the disproportionately loud click comes in.

One option is to look for where the sinewave crosses 0 from negative to positive and then change frequencies there.

if(userChangedFreq) {
    if((Math.sin((cnt)   * 2 * Math.PI * freq / sampleRate) < 0) &&
       (Math.sin((cnt+1) * 2 * Math.PI * freq / sampleRate) > 0)) {
        freq = newFreq;
        cnt = 0;
        userChangedFreq = false;
    }
}

There’s still clicking, but it’s not as strong because the discontinuity has been removed. It’s still there, though, because the waveform is “bent” during its upward slope and that distortion in the waveform is being picked up somewhere along the pipeline between the mixer and the speakers.

If we look at the sinewave, the rising slope has the greatest delta change right where the line crosses the x axis. Therefore, a second step that can be taken to reduce the impact of changing frequencies is to wait until the delta change is smallest – when the line is at its peak. This is also the point when the cosine wave is crossing 0 from positive to negative. Then, we would have to reset the cnt counter to a value that translates to 90 degrees for the sine part. The number of samples per cycle = samples per second / frequency. For a 200 hz signal, samples per cycle = 16000 / 200 = 80. The 0 degree point is at cnt = 0; the 180 degree point is 80 / 2, or cnt = 40; and the 90 degree point is cnt = 20.

if(userChangedFreq) {
    if((Math.cos(cnt     * 2 * Math.PI * freq / sampleRate) > 0) &&
       (Math.cos((cnt+1) * 2 * Math.PI * freq / sampleRate) < 0)) {
        freq = newFreq;
        cnt = (16000 / freq) / 4;
        userChangedFreq = false;
    }
}

Well, there’s still clicking if you’re changing frequencies in larger than 10 hz steps, but it’s almost unnoticeable at 5 hz increments for low frequencies. If we look at the waveform now, we can see that the effect of changing frequency has been lessened, but there’s still a flattened area near the peak. This can be further minimized by changing the transition check from when the line is about to cross 0, to the point where it’s just finished crossing zero:

if(userChangedFreq) {
    if((Math.cos((cnt-1) * 2 * Math.PI * freq / sampleRate) > 0) &&
       (Math.cos(cnt     * 2 * Math.PI * freq / sampleRate) < 0)) {
        freq = newFreq;
        cnt = (16000 / freq) / 4;
        userChangedFreq = false;
    }
}

Looking at the curve now, the transition has become almost seamless. The clicking has virtually disappeared for frequencies under 300 Hz, and has been greatly softened even up around 1 kHz, where the seam has more of an impact. This is as good as I’m going to get at the current 16,000 sample rate.

Now, once we have the 0-crossing test in place, it’s a simple matter to count the number of crossings, and then add variables for max. crossings; steps between old frequency and new; and max. steps. Add max. crossings and max. steps to the oscillator class strVal() and setStr() methods, and you now have glide.

Glide is definitely undesirable in gate arpeggiator and LFO oscillators, because it causes a “stutter” effect when you’re doing things like changing the frequency for gating the ADSR, or driving a second oscillator for frequency sweeps. Therefore, I added an enableGlide boolean parameter to the oscillator constructor. However, even with glide disabled, I still need to change frequencies as described above, so I just set the glide counters to glide max. to trick the nextSlice() for-loops.
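Boiled down, the glide bookkeeping looks something like this (a sketch with made-up names; in the real osc class this is folded into the nextSlice() logic):

class glideSketch {
    double freq = 200.0, targetFreq = 200.0;
    int crossings = 0, maxCrossings = 4;   // crossings to wait between hops
    int glideStep = 0,  maxSteps = 20;     // hops from the old frequency to the new
    private double stepSize = 0.0;

    void setFreq(double newFreq) {         // request a glide to a new frequency
        targetFreq = newFreq;
        glideStep  = 0;
        stepSize   = (targetFreq - freq) / maxSteps;
    }

    // Called at each upward zero crossing of the waveform.
    void onZeroCrossing() {
        if (glideStep < maxSteps && ++crossings >= maxCrossings) {
            crossings = 0;
            freq += stepSize;              // one small hop toward the target
            glideStep++;
        }
    }
}

Setting maxSteps and maxCrossings to 1 effectively disables the glide while still confining frequency changes to zero crossings.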

Overall, the code for the osc class is getting ugly. I may not be doing everything in an optimal way. But the resulting sound from the synth is getting pretty good. Next is to clean up the engine and add panning. From there, I’ll tackle the GUI again to abstract the module controls further, to give the user the option of designing their own module wiring without having to hardcode it within the Java app.

Raw formatted textfile.

Java Synthesizer, Part 14 – Mixers


Ok, I sat down and spent half a day messing with the Mixer class.  Big disappointment.

If we go back to Dick Baldwin’s tutorials, the one on specifying a mixer does a pretty good job of laying out what we need in order to explicitly specify a mixer for use in the sound engine. Granted, his example uses a TargetDataLine for capturing data from a microphone and I need a SourceDataLine for sending to the speakers, but the premise is the same. The issue is just one of which tweaks are needed to change the code over.

Looking at the setup for the engine, the old code was:

private void startSend2SpeakersListener() {

    try {

        InputStream baStream       = new ByteArrayInputStream(audioData);
        audioFormat                = new AudioFormat(sampleRate, sampleSizeInBits, channels, signed, bigEndian);
        audioInputStream           = new AudioInputStream(baStream, audioFormat, audioData.length/audioFormat.getFrameSize());
        DataLine.Info dataLineInfo = new DataLine.Info(SourceDataLine.class, audioFormat);
        speakerLine                = (SourceDataLine) AudioSystem.getLine(dataLineInfo);
        speakerLine.open(audioFormat);
        speakerLine.start();
        new sendToSpeakers().start();                                           // Start listener
    }
    catch(Exception ex) {
        jTextArea1.append("startPlayDataListener Exception: \n" + ex + "\n");
    }
}

And the new code is:

private void startSend2SpeakersListener() {
    try {
        Mixer.Info[] mixerInfo     = AudioSystem.getMixerInfo();                    // Get the list of all Mixers on the system.
        Mixer        mixer         = AudioSystem.getMixer(mixerInfo[0]);            // On my system, only mixers 0 and 1 work, out of 6.
        InputStream baStream       = new ByteArrayInputStream(audioData);
        audioFormat                = new AudioFormat(sampleRate, sampleSizeInBits, channels, signed, bigEndian);
        audioInputStream           = new AudioInputStream(baStream, audioFormat, audioData.length/audioFormat.getFrameSize());
        DataLine.Info dataLineInfo = new DataLine.Info(SourceDataLine.class, audioFormat);

        speakerLine                = (SourceDataLine) mixer.getLine(dataLineInfo);  // Explicitly call mixer 0.
        speakerLine.open(audioFormat);                                              // Open the source line using mixer 0
        speakerLine.start();                                                        // Start the source line for receiving data
        new sendToSpeakers().start();                                               // Start listener
    }
    catch(Exception ex) {
        jTextArea1.append("startPlayDataListener Exception: \n" + ex + "\n");
    }
}

First, I make a call to the AudioSystem to get the mixer info object. I have 6 mixers that show up when printing the names out from the array. 3 are targets (connections to the mike jack and the sound card input line), and one is some weird source driver that doesn’t support the audio format I’ve selected. Only mixers 0 and 1 work with my app, and they behave pretty much the same. I’m glossing over it, but there are actually 2 source lines per source mixer – one dedicated to SourceDataLine objects, and one for Clip objects. If I were writing an MP3 player, I’d be using Clip.class for obtaining the data line. However, since I am using DataLine.Info, it’s making the choice for me as to how to grab the correct Line object, and the specification of SourceDataLine.class tells the constructor which of the two types of lines to get for me. (You can look at my test program data dump for the actual output.)

Next, I assign the speakerLine object from mixer.getLine() instead of AudioSystem.getLine(). Otherwise, the two versions are very similar. The reason for going through the extra effort is to get at the line controls. According to Baldwin, different mixers have controls for things like volume, balance, pan and reverb. I wanted to see if there was one for waveform smoothing, or whatever is introducing the clicks when the frequency changes too fast. And I wanted to see if changing mixers would make a difference. The way to get at the controls is to get the mixer line, make sure it’s not a Clip line, open it, and call line.getControls(). When you’re done, be sure to close the line.

mixInfo = mixer.getSourceLineInfo();
Line line = mixer.getLine(mixInfo[i]);
if(! line.toString().toLowerCase().contains("clip")) {
    line.open();
}
jTextArea1.append("      Line name: " + line + "\n");
control = line.getControls();
jTextArea1.append("         Number of Line Controls: " + control.length + "\n");
for(int j=0; j < control.length; j++) {
    jTextArea1.append("         " + (j + 1) + ": Control " + control[j].toString() + "\n");
}
if(line.isOpen()) line.close();

To use the control –

if (line.isControlSupported(FloatControl.Type.MASTER_GAIN)) {
    FloatControl volume = (FloatControl) line.getControl(FloatControl.Type.MASTER_GAIN);
    volume.setValue(6.0F);
}

And now we start really running into the limitations. On my PC, mixer 0 line 0 only supports controls for volume, master gain, balance and mute. However, I can’t access line 0 directly; I have to use the call to the data line with (SourceDataLine) mixer.getLine(dataLineInfo); because Java insists on abstracting it for me, and this strips out the controls for volume and balance, making only master gain and mute available to the line.getControls() method. And Java’s enum list for FloatControl.Type doesn’t include a value for mute. So, between Java and the sound drivers on my PC, the only mixer control I can access is master gain. Which has nothing to do with solving my problem of clicking in the waveform.

I would say that the entire exercise of trying to explicitly call a mixer was a waste of time and then rip all the new code out, except that it’s only 3 lines and I may buy a new PC in the future with a better sound card. Plus, the extra 3 lines of code have no effect on the behavior of the app, so I’ll leave them in place for now, and keep the old code as backup just in case.  Regardless, I’m running out of options for getting rid of that stupid clicking…

Raw formatted textfile.
Test Program Data Dump.

 

Java Synthesizer, Part 13 – Echo


Echo, or reverb, is probably one of the simplest functions to implement so far. I don’t have anything else to pair up with it, so this entry may prove to be the shortest of the group.

Initially, I wanted a buffer array of 64K elements, but Netbeans threw errors when I used a range of 0 to 64,000, so I dropped it down to 32K within the circuit object definition. That’s still 2 seconds of loop time at a 16,000 sampling rate, so I’m not going to bother improving the code until the need arises later. Everything else is straightforward. One pointer keeps track of where the next sample will be written in the array, and a second points to where to read from. Reading or writing increments the corresponding pointer. jSliders control the max loop, delay time and decay rate.

The write pointer goes from 0 to 31,999, which gives 2 seconds before the next set of data starts overwriting the old sounds. By changing loop max, I can get the older sounds mixed in with the new ones in less time. The echo playback effect is based on how far “read” lags behind “write”. So, delay time is the number of elements the read pointer is offset from the write pointer, with error checking to ensure that delay time is less than loop max. Finally, when I mix the new data with the old, the decay rate sets how quickly the sounds go to zero. Actually, the rate is just a multiplier, where:

buffer[writePtr] = decay * buffer[writePtr] + newData

Just to keep things simple, the data going into the echo buffer is the finished output from vca1, which then goes to the VCF module if an FFT filter is to be used. The best echo effect is with delayVal around 2000, and decay around 0.25, but the sheer fact that echo works at all is really cool.

echo echo1 = new echo(32000, 10000, 0.75); // int loopVal, int delayVal, double decay
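For reference, a minimal sketch of such an echo class (the method names match the loop below; the wrap-around details are my own):

class echoSketch {
    private final double[] buffer;
    private int writePtr, readPtr;
    private final double decay;

    echoSketch(int loopVal, int delayVal, double decay) {
        buffer     = new double[loopVal];
        this.decay = decay;
        writePtr   = 0;
        readPtr    = (loopVal - delayVal) % loopVal;   // read trails write by delayVal
    }
    double readOldEcho() {
        double val = buffer[readPtr];
        readPtr = (readPtr + 1) % buffer.length;       // advance with wrap-around
        return val;
    }
    void writeNewEcho(double newData) {
        buffer[writePtr] = decay * buffer[writePtr] + newData;   // mix old and new
        writePtr = (writePtr + 1) % buffer.length;
    }
}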

addBuffer for-loop:

for(int pCnt = 0; pCnt < 320; pCnt++){                    // Build up the sample
    osc2.nextSlice();                                     // Increment oscillator 2
    osc3.setFreq(bender1.out(osc1.freq));                 // Pitch bend oscillator 3 relative to oscillator 1's frequency

    hold = (vca1.out( adsr1.nextSlice((osc1.nextSlice() + osc3.nextSlice()) / 2 )) + echo1.readOldEcho())/2; // Mix osc 1 & 3, run through ADSR and amplify with VCA 1
    echo1.writeNewEcho(hold);

    if(! vcf1.filterOn) shortBuffer.put( (short)( hold ) );          // Prep sound engine buffer if not using filter
    else                vcf1.addToChunk(pCnt, new Complex(hold, 0)); // Using filter: add data to filter buffer
}

I don’t know what circuit symbol to use for echo. Suggestions?

Raw formatted textfile here.