K-Gater, ver. 3.3 – K-Pro as an effects processor


The Kaossilator Pro has about 15 vocoder voices, which makes it suitable as an effects processor, assuming that you have some kind of interesting input signal. Obviously, the intended signal is the human voice via a microphone, but that’s not the only choice.

In order to adapt K-Gater to control the K-Pro for this, I needed to assign 2 more controls and 3 of the drum pads. My intent was to keep using the 8 preset buttons for retaining K-Pro instrument selections, and then assign at least a few of them to the vocoder instruments (U185 to U199). Then, all I'd need to do is toggle the effect on and off, change voices, and hand-control the x and y axes of the K-Pro. The x- and y-axis settings were easy; I just allocated dials R3 and R4, and stored the values to 2 new variables. I created a new class, KProSettings, for storing all of the K-Pro variables I had up to that point plus the new ones. I also added entryStatus to let me deactivate the effect, activate it, or select instruments without actually changing the voice (for those times when you want to go from U185 to U190 without playing the voices in between). Pad A1 is the toggle switch, A2 increments one voice for the last K-Pro preset switch that was used, and A3 decrements it.
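As a rough sketch, KProSettings boils down to something like the class below. The field names here are illustrative guesses rather than the actual K-Gater members.

class KProSettings {
    int instrumentNo = 0;   // currently selected K-Pro voice preset
    int xAxis        = 0;   // 0-127, stored from dial R3
    int yAxis        = 0;   // 0-127, stored from dial R4
    int entryStatus  = 0;   // 0 = effect off, 1 = effect on, 2 = select a voice without sending it
}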

For the most part, the code doesn’t change much, and there’s no change to the UI.  The results, though, are very nice, especially when I use the dual arpeggiators in free running mode. One fun side effect is that the vocoders often disguise the actual notes being played, so that you can’t really hear a pitch change when you change notes. Instead, there’s a major impact on the vocoded output, even if you’re just going to the next adjacent note.  What this means is there’s an almost infinite combination of sounds possible, and you need to take good notes to be able to recreate the same sounds from one day to the next. It’s almost like having a real synthesizer.

Youtube video


Formatted Textfile Here.

 

K-Gater 3.2 – adding a second arp


No entry for the 50 Famous People this week.  Next one will be next week.

———————————————————————

I like synthesizers. I like the ability to manipulate and warp sound into strange, new shapes. Unfortunately, I don't have the money or space to buy anything new, and adding sequencer functionality to K-Gater isn't quite the same thing (that, and I'm still thinking about the best approach to take). I am still considering writing a simple ADSR synth in Java, but it's not high on my priority list yet. Recently, though, I started wondering what would happen if I added a second arpeggiator function to K-Gater.

If you arpeggiate a piano sound fast enough, it stops sounding like a piano. With a simple arp pattern where you're just turning one note on and off (0 0 0 0), you get a kind of drum machine stutter effect. So, adding a second arp would modulate the first one. Technically, this might be like having a really long pattern that emulates a modulated one (0 0 0 0 0 0 0 0 2 2 2 2 2 2 2 2 0 0 0 0 0 0 0 0 2 2 2 2 2 2 2 2, etc.), but you wouldn't be able to vary the modulation in real time this way. It's a simple enough effect to add, although the obvious approach would require 3 to 4 more control knobs, plus duplicating the rate, ratio, pattern select and step size components on the laptop screen. Instead, I decided to double up the use of the existing Java screen components and just flag whether they're displaying/modifying the settings of Arp 1 or Arp 2. Fortunately, I still have 5 free dials on the A-300 Pro, and after some further thought, I realized I only needed 3 of them.

The real difference between arp 1 and 2 is that arp 2 doesn’t need “ratio”. The purpose of arp 1 is to turn the note on and off. But if arp 2 is modulating arp 1, then there’s no actual turning off of the first pattern. That is, arp 2 has a permanent 100% ratio.

I was also wondering how to approach the modulating waveform: whether I should settle for just a square or a triangle wave, or use one of the free controls for selecting a sawtooth and a sine wave as well. But then I realized that because the modulation runs at a 100% ratio, a square wave would just be a really slow 0 1 pattern, and I could change the height of the square wave by adjusting the arp 2 step size (which was already in the plans, anyway). Likewise, a sawtooth would just be a 0 1 2 3 pattern and a reverse sawtooth would be 3 2 1 0. In other words, I could greatly simplify the controls, but it would still mean rewriting the Java code for the arp 1 rate, pattern select and step size components to cope with MIDI messages coming in from the two sets of A-300 dials and sliders. In fact, the rewrite itself was fairly trivial, and just needed additions to parseMidi() and timer(), along with the instantiation of a second gateArp object.
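To make the idea concrete, here is a standalone toy version of the "arp modulating an arp" behavior. This is not the K-Gater code; the pattern values and step size are made up, and arp 2 simply shifts whichever note arp 1 is gating.

public class DualArpSketch {
    static int[] arp1Pattern = {0, 0, 0, 0};   // simple one-note stutter pattern
    static int[] arp2Pattern = {0, 1, 2, 3};   // "sawtooth" modulation, 100% ratio
    static int   arp2Step    = 2;              // semitones per arp 2 pattern unit

    public static void main(String[] args) {
        int baseNote = 60;                     // middle C
        for (int tick = 0; tick < 16; tick++) {
            int n1 = arp1Pattern[tick % arp1Pattern.length];
            int n2 = arp2Pattern[(tick / arp1Pattern.length) % arp2Pattern.length];
            System.out.println("tick " + tick + " -> note " + (baseNote + n1 + n2 * arp2Step));
        }
    }
}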

The overall effect of having a modulating arpeggiator isn't quite as awesome as I'd imagined, and there's no justification for taking the next step and adding a third arp. But it does completely destroy the original piano sound and rearrange it into something new, which was the entire point. Having two arps doesn't work well with all of the default software synthesizer voices; the best effects seem to be with piano, square wave and tympani. And, if I add the Gakken SK-150 Mark II for filtering, the final output is kind of amusing.

And I was just about to leave K-Gater at this point when a thought surfaced from the back of my mind – “What if the arps were free running?” I’d copied the rate timing from the Kaossilator Pro, which is fixed at 4 seconds, 2 seconds, 1 second, 3/4 sec., 1/2 sec., 1/4 sec., etc. This is useful if you’re making music based on normal timing, but it is limiting and doesn’t allow for generating a range of beat frequencies.  So, I repurposed drum pad A5 on the A-300 to turn it into a toggle switch between fixed and free rates.  The next step required a complete gutting of the arp code because the rate slider was hardcoded to only select between the 11 fixed rate values, and now I wanted to run from 0 to 127. When I hit A5, I program the rate slider to either go from 0 to 10, or 0 to 127, and either pull the fixed rate from the timing array (which I moved out from the jSlider2 action listener) or calculate it as 1024 divided by the jSlider2 select value (giving a rate value of 1 second down to 8ms). I eventually realized that I needed a map() method, also.
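The rate selection then works out to something like the sketch below. The table values and names are stand-ins, not the actual K-Gater constants.

class ArpRate {
    // Illustrative stand-in for the 11-entry fixed timing table (values in ms)
    static final int[] FIXED_RATES_MS = {4000, 2000, 1000, 750, 500, 375, 250, 188, 125, 94, 63};
    boolean freeRun = false;                  // toggled by drum pad A5

    int rateMs(int sliderValue) {
        if(!freeRun) {
            // Fixed mode: the slider is limited to 0-10 and indexes the table
            return FIXED_RATES_MS[Math.min(sliderValue, FIXED_RATES_MS.length - 1)];
        }
        // Free-running mode: the slider runs 0-127, giving roughly 1 second down to 8 ms
        return 1024 / Math.max(sliderValue, 1);
    }
}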

When I was playing with the Japanino microcontroller (AKA: the Arduino), one of my favorite C++ functions was map(), which easily converts values between ranges. That is, if a jSlider selects from 0 to 127, and I want to use this to switch between my 8 Kaossilator voice preset buttons, I can do something like:

button = map(jSlider1.getValue(), 0, 127, 0, 7);

where "0, 127" is the range of the slider and "0, 7" is the new range I want to convert to. If the slider is at 40, then I'll be activating preset button 2. So, once I had map(), I went back through the entire Java app and replaced the old code with it where possible.
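For completeness, a map() that mirrors the Arduino integer version (no rounding or bounds checking) is just:

private int map(int x, int inMin, int inMax, int outMin, int outMax) {
    return (x - inMin) * (outMax - outMin) / (inMax - inMin) + outMin;
}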

I like the effect of having dual free-running arps.  Very noisy.

Youtube demo video

Full formatted Java code.

 

K-Gater Ver. 3.1


I know that I said I was going to put bug fixing on hold, but I really wanted to make a video of everything running together and I wanted to make sure that the program wouldn’t crash during recording.  Along the way, I remembered the mention somewhere of something called “aftertouch”.  Then, while trolling youtube for videos about the A-300 Pro, I saw someone actually using it and decided that I’d like to try implementing aftertouch into K-Gater with the software synth (the K-Pro doesn’t support it).  So, I got into debug mode while seeing about adding the new function.

The first thing I discovered (although it took too long to catch on) is that not all MIDI messages are 3 bytes long. Most of them are, but Channel Pressure is only 2 bytes, and my MIDI message print function was throwing an index out of bounds exception until I figured that out. There's a nice document on the net that describes both Channel Pressure and Aftertouch, and it says a Channel Pressure message is just the status byte plus one data byte. That matches what the A-300 Pro sends: a 2-byte message where byte[0] is the status byte (0xD0 to 0xDF, which encodes the channel number) and byte[1] is the pressure value (0-127).

Aftertouch is per note, while Channel Pressure is averaged across all pressed keys on the channel. While a key is held down, varying the pressure on it causes the software synthesizer to create a vibrato effect (other external hardware can implement it differently). According to the above-mentioned description, most synthesizers only offer one option or the other, with cheaper keyboards using Channel Pressure, although some keyboards do have both. The A-300 Pro only has Channel Pressure. The Roland product description says it has "channel aftertouch", which is misleading, because that really means "channel pressure" and NOT per-key aftertouch.

Anyway, to implement the function in Java, the code in parseMidi() looks like:

if(pNo == 0) {                                        // A-300 Port 1
    if(midiStatus >= 208 && midiStatus <= 223) {      // Channel Pressure
        if(channelNo != midiPorts.kProChannelNo) {    // Only for software synth, not K-Pro
            channels[channelNo].setChannelPressure(midiByte[1]);
        }
    }
}

Pretty simple.

But I was still in debug mode, and the specific bug I was trying to track down is the "stuck key" problem. If I hit too many keys randomly for a certain amount of time, one of the keys gets stuck in the "key pressed" ArrayList and continues to play after I release it. I mentioned in the last entry that it's because two incoming MIDI messages can interfere with each other, and the parser fails to see one of them arriving. I'm pretty sure the same thing holds for two NOTE_OFF messages arriving together, but that's harder to check. I hadn't encountered this problem before adding the "add note" and "delete note" ArrayList buffers, so I'm pretty sure the A-300 was sending the messages correctly and that the software synth was seeing matching NOTE_ON/NOTE_OFF pairs regardless of how I banged on the keyboard, meaning that the parseMidi() method was also working right. But somehow, I'm losing that one occasional NOTE_OFF message. The workaround is to turn the note off anyway during the flush of the "remove note" buffer. It probably won't affect how the music sounds, and that's the important part.
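As a sketch of that workaround (the names are assumptions, and removeFromKeyPressed() is a hypothetical helper that masks the velocity out before comparing), the flush step just sends NOTE_OFF regardless:

private void flushRemoveBuffer() {
    for (Integer note : removeNoteBuffer) {
        noteOff(note);                  // turn it off even if no matching NOTE_ON was recorded
        removeFromKeyPressed(note);     // hypothetical helper; harmless if the note isn't in the list
    }
    removeNoteBuffer.clear();
}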

I uploaded the video of the entire rig to youtube.

Full formatted Java listing.

 

K-Gater Ver. 3


I'm now up to K-Gater version 3. Version 1 was when I just started out and didn't really understand the way the Kaossilator Pro behaves compared to the software synthesizer, and I didn't have the Roland A-300 Pro MIDI controller keyboard. The code worked, but it wasn't very efficient and it only sent data in one direction: out to one of the two players. With version 2, I added the A-300 and moved some of the K-Pro receiver handling code out of the Change Instrument and Play Note methods. I also broke the initInstrument code in two, making the process of obtaining MidiDevice objects more modular. The next step was to figure out how to open transmitters to the A-300, set up the sequencer-sequence-track combinations, and then read incoming MIDI messages prior to interpreting them and passing the results to either the K-Pro or the software synth. There were a few minor tweaks as well, such as making the metronome button flash red on the beat counts.

Version 3 was mostly intended to just add polyphony to the arpeggiator. It also ended up being a shake-out stage, where I had to do a lot of unexpected bug fixing. And I moved a few of the variables around to put them into a new class for recording NOTE_ON/NOTE_OFF events.

Originally, I just wanted to take MIDI NOTE_ON and NOTE_OFF messages from some keyboard device and convert them to CONTROL_CHANGE messages to send to the K-Pro. While I was at it, I figured I might as well add the arpeggiator ability. And that’s where a certain limitation crept in. I was only tracking one key at a time for ‘pegging, which sacrificed the polyphony (ability to play two or more notes at a time) supported by the A-300 and the software synth. Under most conditions, I expect that musicians won’t ‘peg chords, but it did make it difficult to smoothly transition from one note to another with the arp turned on. I had to release one key and wait a fraction of a second before pressing the next, and that produced a noticeable break in the ‘pegging.

The obvious solution was to convert the single variable to an ArrayList of key-NOTE_ON messages. To play the chords while ‘pegging or when changing preset voices, I just run through the array and send NOTE_ON for each key still being pressed. If a key is released, I remove that key from the ArrayList, and send a NOTE_OFF message to the instrument. This is fine in concept, but there’s a problem of losing notes when ‘pegging while changing instrument voices, so I turn all the notes off before changing the voice and then send new NOTE_ON messages afterward. Which brings me to the structure of the NOTE_ON message. We not only need to pass the number (0-127) of the note itself, but also the velocity (0-127). Velocity can either translate to “volume”, or it’s a measure of how hard the user hit the key on velocity-sensitive keyboards. If I’m sending NOTE_ON when ‘pegging or changing instrument voices, then I need to record the velocity with the note number. If I’m sending note off, I only need the note number, but now the velocity is associated with the note number within the keyPressed ArrayList and I need to mask it out when searching for the note number to be removed from that ArrayList. It’s not all that difficult to deal with, just kind of petty.
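One way to keep the velocity attached to the note in a single ArrayList entry is to pack both into one int and mask the velocity back off when searching. This is only a sketch; the actual K-Gater representation may differ.

private final java.util.ArrayList<Integer> keyPressed = new java.util.ArrayList<Integer>();

private void recordNoteOn(int note, int velocity) {
    keyPressed.add((note << 7) | velocity);      // note in the upper bits, velocity in the low 7
}

private void recordNoteOff(int note) {
    for (int i = 0; i < keyPressed.size(); i++) {
        if ((keyPressed.get(i) >> 7) == note) {  // mask the velocity out of the comparison
            keyPressed.remove(i);
            break;
        }
    }
}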

But, I found something else I didn't quite like. When pressing a second or third key while 'pegging, the notes get turned on or off in the middle of the arp note ON period and the result sounds sloppy. My solution was to buffer the NOTE_ON and NOTE_OFF events, storing them in their own ArrayLists, and then use removeBuffer() and addBuffer() methods to actually add and remove notes from the keyPressed ArrayList at the end of each arpeggiator time period. This results in a much smoother transition when 'pegging from one key to another, but a new bug slipped in. Now, if you play a chord and both keys hit at the same time, one of the notes may get "stuck" in the buffer and the program will keep 'pegging after you release all of the keys. The problem revolves around two NOTE_ON messages arriving at the PC at the exact same time and confusing the MIDI parsing process. So, I either don't buffer and allow chord changes in the middle of an arp note-on event; I buffer and try to avoid pressing 2 keys at the exact same time while 'pegging; or I flush the keyPressed ArrayList when I turn the gate arp off (this is the workaround I settled on until I can fix the larger problem).
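The buffering itself reduces to something like the sketch below (again, the member names are guesses): parseMidi() only queues the key events, and the master timer applies the queues when the arp period wraps.

private final java.util.ArrayList<Integer> addNoteBuffer    = new java.util.ArrayList<Integer>();
private final java.util.ArrayList<Integer> removeNoteBuffer = new java.util.ArrayList<Integer>();

private void applyKeyBuffers() {               // called from the timer at the end of each arp period
    keyPressed.addAll(addNoteBuffer);          // packed note+velocity entries, as sketched above
    addNoteBuffer.clear();
    for (Integer note : removeNoteBuffer) {    // plain note numbers from NOTE_OFF messages
        recordNoteOff(note);                   // masks out the velocity while searching keyPressed
    }
    removeNoteBuffer.clear();
}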

I haven’t decided what I’m going to do next. However, I’m spending more time on program debug than I am actually playing the keyboard. So I’m going to post the code now and come back to it again later when I have more time.

As I was going through the code, I realized also that the idea of a “base key” (jTextField2) for the arpeggiator was messing up my ability to change the keyboard bottom key and key spacing of the A-300 when driving the K-Pro. So I simplified the code and removed jTextField2. This gave me a little more room on the screen and I shortened the west, middle and east panels, and made the south panel taller. This gives me more room for displaying debug messages. Otherwise, the user interface is largely unchanged.

Formatted textfile.

 

Roland Part 5


That’s pretty much it for the process of intercepting MIDI data from an input device and then sending the interpreted data to the output device.  Now that I have the 3 sequencers set up to capture MIDI data to Track objects, it’s a minor step to add a real sequencer to my app.  Unfortunately, I’m a lousy musician, and I need a lot more practice before I’m ready to record anything.  So, sequencing will come later, but that will be easy compared to everything else that has led up to this point.

I’ll just make some comments now, based on what I’ve learned so far.

Regarding the K-Pro:

Controlling the K-Pro via CC messages is trivial. And using a keyboard with it really does make it sound better.  However, I’m noticing a very limited sound range for note # sent versus note played.  I’m sending 0-127 to the K-Pro as the X range, but 0 to about 50 all sound like the same lower note.  Then 90 to 127 sounds like the same upper note.  It’s 50 to 80 or 90 that gives me the touchpad’s full range of the sounds as far as I can tell.  This might be related to the key or scale selected on the K-Pro. I haven’t gotten that far in figuring out what’s going on.

Regarding the Software Synth:

If I send MIDI data directly to the synth, it’s pretty fast and reliable. But if I use something like Sonar I get a half-second delay from pressing the key to the sound being produced.  Or, if Windows is doing something like a software update or running Norton background scan, the notes get cut off or scrambled.  Also, I tried loading other soundbanks, and discovered that there aren’t as many files available for download as I’d thought there were.  I tried using the higher-quality Java soundbank (the 5-meg one), but it wouldn’t load on my laptop.  For the moment, it’s not an issue because I’m still just learning how to play with the piano sound.  Later, I may end up using Sonar more and then I’ll need to figure out how to reduce the latency issue.

Regarding keyboard controllers:

I started out by looking at the Avid, Apex, Korg and Yamaha controllers. (All images taken from the Amazon pages.)

Korg’s mini-controllers looked and felt too cheap.

The Avid Keystudio keyboard was a little too big for my use and doesn’t have much in the way of controls or touch pads.

(MPK-25)

(MPK Mini)

(LPK-25)

(LPD-8)

The Akai Pro MPK mini was better, but the user reviews indicated that the touch pads are unresponsive unless you mod them with tape.  Plus the keys felt too soft and mushy on the big unit, and too narrow on the Mini.  The LPK-25 was too limited on controls, and if I got the LPD-8 along with it for the extra pads, I’d run out of USB ports.

The Yamaha KX25 was a strong candidate. The keys felt similar to the A-300 Pro, and there are a lot of buttons, though it’s kind of lacking in terms of variable knobs.  It’s in the same price range and it does have the second variable wheel.

I was tempted to go with the Micro Korg synthesizer to get an actual, real synth, and swallow the fact that it’s almost $400. But again, the micro-size keyboard is too small for my fingers and I keep hitting two keys at a time.

While most of these products are less expensive than the Roland controller, they just don’t seem to be worth the money to me.  The A-300 Pro is the most expensive of the keyboard controller group, but it’s the only one that gives me enough room to grow into (plus it has more keys and sliders than the Yamaha).

Regarding the Roland A-300 Pro:

This thing just looks cool.  With all of the sliders, knobs and buttons, plus the orange display panel, the A-300 screams “I’m high tech!” I mapped all of the K-Gater controls over to the A-300, and it took up fully half of the buttons and knobs.  And that was after creating new functions just to be added to the A-300 because I could. Meaning that I still have room for adding more ideas later (like the sequencer option).

I’m not really happy with the velocity function, though. There are times when I press a key and nothing comes out. I think this is because the key only fires if it hits the bottom of its stroke. Pressing the key fast, but not going all the way down has no effect.  I don’t know if this is what professional musicians prefer, but to me it’s very jarring.  Maybe as I practice more, I’ll change my mind.

I like the fact that the A-300 can run off the USB cable without needing another power supply (like most controllers).  But I don’t know how much strain it’s putting on my laptop.  I may buy a separate adapter soon just for the peace of mind.

I spent a lot of time on the U.S. Korg website and wading through the official Korg forums. There's a sense of it being laid-back and relaxed, while also reaching out to their customers. Not 100% perfect, but still nice. The U.S. Roland site, though, was designed by lawyers. "We don't support anyone outside the U.S."; "We don't support Sonar (even though it's packaged with our product)"; "You can't use our forums if you didn't buy our product in the U.S.", etc. Very unfriendly. What I dislike most is that even though I bought the keyboard in Japan, there's no sales or support office here. So, not only can't I get information from an official source in Japan, but the U.S. site refuses to help me as well. Then there's the lack of additional documentation or knowledge base data for the A-300 on the U.S. site. It's like Roland made the machine and then decided there was no point in getting anyone to buy or use it. Don't get me wrong: I think the A-300 is a great product, I just dislike the support side. I think that Korg is a much friendlier and more welcoming company.

Will I buy another Roland product at this point? It depends. I wanted a Korg keyboard to go with the Kaossilator Pro, but nothing at the music shop near me in Japan came close to what I was looking for. The A-300 gives me the controls I want in a small package with a decent keyboard feel, and it's very rugged. But if I can't count on Roland to answer technical questions, then no, I'll keep looking for a friendlier company. I desperately want a synth like the MicroKorg, but I can't stand the micro-style keys on it. The music store here doesn't have a similar synth from Roland, and most other synths are either too big or too expensive (I have a very small apartment and a limited budget). For the moment, I'll save my money, and either try to reduce the latency on Sonar to make using it less painful, or try tackling writing my own envelope-control-style synthesizer in Java (I'd have started it already, but I'm afraid that the FFT transforms for frequency filtering may cause a big performance hit on sound generation).

I’ll make a brief comment on the Korg Monotron. This is a ribbon-controller synth, with very simple sound and controls. The primary use in a rig like mine is as a filter on the output from the K-Pro.  I saw a video on the Korg site demonstrating this application, and I started thinking that the Gakken SK-150 Mark II synth that I have (7000 yen, or $84 USD) might be used the same way.  The Monotron price has been slashed and some stores have it for $47, making it a reasonable choice for an effects box.  I do like the results I get with the Mark II and it does make the entire rig more fun to play with; I just wish it was a stereo unit.

That’s it for this series.  If any of this was useful, please give me a comment.  I’d love to work with other Java programmers in controlling external MIDI hardware, and would like to get to the point where we can build on each other’s experiences.

 

Roland Part 4


Ok, so we've run initMidiDevices(), which gave us MidiDevice instances, plus the receivers and transmitters for both the K-Pro and the A-300. We ran readSoundBank() to set up the software synthesizer, and setupA300() to init three software sequencers, one for each A-300 port, and to enable recording to three Tracks, one for each sequencer.

We're finally ready to get some useful work done. One of the last things to happen in setupA300() was that I set kBoard.seqTmr = 0. I'll ignore my kBoard object for now, except to say that it just holds some of the variables I'd been using in K-Gater all along, such as keyboardBottomNote and keyboardSpacing. If we look now at the master timer, we can see what seqTmr does:

class timerExec extends TimerTask {
    public void run() {
        if(met.tmr > -1) {
            met.tmr++;
            if(met.tmr >= met.bpmTotal) {
                met.count();
            } else if(met.tmr == met.flashCnt) {
                met.resetButton();
            }
        }
        if(arp.arpTmr > -1 && arp.keyPressed) {
            arp.arpTmr++;
            if(arp.arpTmr >= arp.arpTime) {
                playArp(arp.calcNote);
            }
        }
        if(kBoard.seqTmr > -1) {
            kBoard.seqTmr++;
            if(kBoard.seqTmr > 5) {
                kBoard.seqTmr = 0;
                readA300();
            }
        }
    }
}

As before, I'm using it to run the metronome and the gate arpeggiator. Undoubtedly there's a better way to do this, such as firing off action listeners, but this is good enough for hobbyist purposes. Now, every 5 ms, I'm going to call the readA300() method. And here's where the real secret lies in intercepting MIDI messages in real time. It's kludgy and roundabout as all hell, but for the life of me I can't find any other way to do this. While the sequencer is recording and no MIDI data has arrived, the track size is going to be 1 (meaning we just have the required "end of track" marker). So, stop recording, and if the track size is greater than 1, we have new MIDI data. Since we're checking so rapidly, the odds are that there will only be one message waiting, and if there are more, we'll grab them on the next call to readA300(). Read the MIDI event into a temporary object, remove the event from the track, and restart recording. I also call my parseMidi() function with the port number (0-2) and the MIDI message. Remember to check both (or all three) A-300 Pro MIDI IN ports.

private void readA300() {
    for(int i=0; i<aProMaxPorts; i++) {
        seqcr[i].stopRecording();
        if(track[i].size() > 1) {
            MidiEvent me = track[i].get(0);
            track[i].remove(me);
            parseMidi(i, me);
        }
        seqcr[i].startRecording();
    }
}

Now, one of the things that I will harp on about Java over and over is how stupid it is with regard to lists. If you have two identical items in a list, no matter what you do, removing an item will ALWAYS cause the first instance in the list to be deleted. If you have two lists that you're trying to keep aligned, Java will break the alignment by removing something other than what you wanted. Fortunately, right now, we're ok. MIDI NOTE_ON messages always include velocity data, and the odds of the user pressing the same note twice in 5 ms with EXACTLY the same velocity are very low. However, if we build up sequencer patterns later and we want to edit them, this will become more of an issue. The reason for mentioning this is that track.remove(MidiEvent) removes the FIRST matching event, so it's important to keep the seqTmr max value low (5 ms or 10 ms) to avoid getting more than 1 or 2 messages in the queue at any given moment. If the user does somehow manage to press and release the same key within that 5 ms, we'd have 2 NOTE_OFF messages for it, and the first one would be deleted, which is what we want for a FIFO list, so again there are no problems with this just yet. Ultimately, all that really matters is that we send NOTE_OFF messages the same number of times that we send NOTE_ON for each key.
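As a quick illustration of that list behavior:

java.util.List<Integer> notes = new java.util.ArrayList<Integer>(java.util.Arrays.asList(60, 64, 60));
notes.remove(Integer.valueOf(60));   // removes the FIRST 60, leaving [64, 60]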

The last really new step is to parse the MIDI messages.

As mentioned above, it would probably be better to trigger a listener in order to start a separate thread and to allow control to return directly back to the master timer. At the moment, the parser is fast enough that it isn't noticeably messing up the timer. I could also use something like a switch statement instead of a whole bunch of else-ifs, but again, this is fine for hobbyist purposes.

The structure of a MIDI short message is “number”, “name or status”, “data 1” and “data 2”. Generally, for the purposes of K-Gater, the number and status bytes can be mostly ignored. I’m not concerned with the channel number because that’s being handled in my app as part of the instrument voice preset buttons. If I do want to change channels, it will be as part of a control message or a Java .doClick(). So, “channel 1 note on” is identical to “channel 8 note on” as far as I’m concerned. Instead, the contents of Data 1 is of more interest, because that tells me which control or button is being used, and Data 2 has the value of the control, 0 to 127. For the Roland A-300 keyboard controller, sliders and knobs go from 0 to 127. Buttons are “0” for Off and “127” for On.

I’ll just display the code for parsing port 1, which contains the NOTE_ON and NOTE_OFF data for the keyboard keys. You can look at the full app listing for the rest of the control parsing. And yes, I do care about the status byte for the NOTE_ON and NOTE_OFF events here.

private void parseMidi(int pNo, MidiEvent midiEvent) {
    byte [] bbb = midiEvent.getMessage().getMessage();

    if(pNo == 0) {
        if(midiEvent.getMessage().getStatus() >= 128 && midiEvent.getMessage().getStatus() <= 143) { // Note off for channels 1-16
            int h = bbb[1];
            if(channelNo == midiPorts.kProChannelNo) {
                h = clamp((bbb[1] - kBoard.bottomNote) * kBoard.spacing, 0, 127);
            }
            if(! arp.state) {
                if(arp.state) {          // I just noticed this. I need to fix this
                    noteOff(arp.currentNote);
                }
                else {
                    noteOff(h);
                }
            }
            arp.keyPressed = false;
        }
        else if(midiEvent.getMessage().getStatus() >= 144 && midiEvent.getMessage().getStatus() <= 159) { // Note on for channels 1-16
            int h = bbb[1];
            if(arp.makeNewPattern == 0) { // User wants to make new arp pattern using keyboard
                arp.makeNewPatternBase = h;
                arp.makeNewPatternString = "0 ";
                jTextField6.setText(arp.makeNewPatternString);
                arp.makeNewPattern = 1;
            }
            else if(arp.makeNewPattern == 1) { // Add each new key to pattern string
                arp.makeNewPatternString += Integer.toString(h - arp.makeNewPatternBase) + " ";
                jTextField6.setText(arp.makeNewPatternString);
            }
            if(channelNo == midiPorts.kProChannelNo) {
                h = clamp((bbb[1] - kBoard.bottomNote) * kBoard.spacing, 0, 127);
            }
            kBoard.key = h;
            if(! arp.state) {
                int v = clamp(bbb[2] * kBoard.volume / 64, 0, 127);
                noteOn(h, v);
            }
            kProY = bbb[2];
            arp.keyPressed = true;
            arp.calcNote = kBoard.bottomNote;
            arp.patPtr = 0;
        }
        else if (midiEvent.getMessage().getStatus() >= 224 && midiEvent.getMessage().getStatus() <= 239) { // Pitch bend, all channels
            if(channelNo != midiPorts.kProChannelNo) {
                channels[channelNo].setPitchBend((bbb[2] << 7) + bbb[1]); // Can only pitch bend the software synth
            }
        } else {
            if(showMidiCodes) jTextArea1.append("Midi message from PRO 1: " + midiEvent.getMessage().getStatus() + " " + bbb[0] + " " + bbb[1] + " " + bbb[2] + "\n");
        }
    }
} // Add port 2 and MIDI IN port handling here

There’s a lot of extra stuff going on here. When I get a NOTE_OFF message, I have to worry about the gate arpeggiator, because the arp pattern may be changing the note being played, while the user could be holding down a completely different key. Right now, K-Gater supports polyphony when the arpeggiator is off, but becomes monophonic when the arp is on. This sounds jarring to me when I use the arpeggiator, and I need to add support code for keys being pressed and released as the arpeggiator does its thing.

I added a function called clamp(), which is used to ensure that a number is within upper and lower bounds.
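A minimal version of clamp() (the one in K-Gater may be written differently, but the effect is the same):

private int clamp(int value, int low, int high) {
    return Math.max(low, Math.min(value, high));
}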

In the section for NOTE_ON, I added the ability to create new arp patterns directly from the A-300. First, I need to look at whether pattern recording has started, and then build up the pattern string to display in a text box. If recording has ended, add the new pattern to the arp pattern combo box. Otherwise, I use a different control to set the bottom note for the K-Pro, and I need to ensure that bottom note + current key is < 128, so I use clamp() again. If the 'pegger is off and I'm connected to the K-Pro, I use the key velocity data for changing the K-Pro Y-axis value. Otherwise, I combine the volume control with velocity as a pseudo-amplifier to alter the velocity sent to the software synth. I'll also pass some arp settings to the 'pegger, but the process of 'pegging itself occurs in playArp().

If I'm driving the software synth, then I can pass pitch bend data from the pitch controller. This took a bit of screwing around to figure out. Data 1 and Data 2 only contain 7 bits of pitch bend information each, with Data 2 holding the upper 7 bits. To convert to an int, I shift Data 2 seven bits to the left and then add Data 1.

Finally, I use an else-statement to output any A-PRO 1 port messages that were otherwise missed. I process the controller sliders, knobs and switches in the section for port 2.

 

Roland Part 3


In the raw code for initMidiDevice() in the last blog entry, we had a little more code for setting up the K-Pro.  It’s nothing more than loading the instrument names into the combo box.  Otherwise, if we don’t have the K-Pro, deactivate the jButtons for the K-Pro voice presets.

if(kProDevice != null) {
    initkProInstrumentList();
} else {
    jComboBox1.setEnabled(false); // Deactivate K-Pro related buttons if no K-Pro
    for(int i=0; i<11; i++) {
        jBList.get(i).setEnabled(false);
    }
}

For contrast, I’ll include the code for reading the software synthesizer soundbank.  If we get the soundbank, then read the instruments (names, banks and ports) into an array, and load the instrument names into a combo box.

Synthesizer     synth              = null;
Soundbank       soundbank          = null;
MidiChannel []  channels           = null;
Instrument[]    aInstruments       = null;

private void readSoundBank()  {
    try {
        synth = MidiSystem.getSynthesizer();
        synth.open();
        soundbank = synth.getDefaultSoundbank();
        synth.loadAllInstruments(soundbank);
        channels = synth.getChannels();
    } catch (Exception ex) {
        JOptionPane.showMessageDialog(this, "Couldn't Open Soundbank:\n" + ex, "Soundbank Open Error", JOptionPane.PLAIN_MESSAGE);
    }
    if (soundbank == null){
        jTextArea1.setText("no soundbank");
    } else {
        aInstruments = soundbank.getInstruments();
        for (int i = 0; i < aInstruments.length; i++) {
            jComboBox3.addItem(aInstruments[i].getName());
        }
    }
}

Now, things become much more complicated for the A-300.  As mentioned in Roland, Part 1, we can’t just connect the A-300 transmitter to the K-Pro receiver as shown in the Oracle documentation and expect everything to work perfectly.

Well, actually, we can make the connection to the software synth, but we can't change instrument voices via the A-300 controls, and there's a 2-second delay from when you press the key to when sound comes out. In fact, the sound starts looping on us.

To make the connection from the A-300 transmitter to the software synthesizer all we have to do is:

Receiver synthRcvr = synth.getReceiver();
a300Xmtr[0].setReceiver(synthRcvr);

The one nice thing about this is that the “A-PRO 1” port has the note on/off MIDI messages for the keyboard, and the software synth can play them with the default piano voice right away (ignoring the latency and the looping).

We don’t really need to do anything special in order to change the instrument voice.  The A-300 does have the ability to select MIDI channels, so if we wanted to change the voice on channel 3 of the software synth, we do so via the channels objects, and then select channel 3 on the keyboard. It’s just that we can’t see if the user is already pressing a key and then wait to change the voice until after the user releases the key, and we can’t drive the K-Pro using the A-300.

voiceNum = 2; // Piano voice 3
channels[2].programChange(aInstruments[voiceNum].getPatch().getBank(), aInstruments[voiceNum].getPatch().getProgram());

The Oracle sound tutorial does show forking, the process of having one transmitter sending to two receivers. In the example, they have an unspecified input port talking to the software synth and a software sequencer. The nice thing about this example is that you could use it for saving the user's keyboard strokes to the sequencer for later replay. That, and it's our only clue for how to trap incoming MIDI messages to interpret before passing them on to the K-Pro. Reusing the above direct connection code:

Sequencer    seq       = MidiSystem.getSequencer();
Receiver     seqRcvr   = seq.getReceiver();
Receiver     synthRcvr = synth.getReceiver();
Transmitter  a300Xmtr1, a300Xmtr2;

a300Xmtr1 = a300Device[0].getTransmitter();
a300Xmtr2 = a300Device[0].getTransmitter();
a300Xmtr1.setReceiver(synthRcvr);
a300Xmtr2.setReceiver(seqRcvr);

Granted, all of this has to be inside try-catch blocks, but if you look at the Oracle example code, you can see how that works. The point of all of this is that if you look at the Transmitter methods, there's nothing for handling MIDI messages coming from the A-300 keyboard. But there is, sort of, something embedded in the Sequencer. So, instead of using the fork code, we drop the direct connection from the A-300 transmitter to the software synth receiver, and focus entirely on the connection to the software sequencer object.

I won’t bother discussing what’s needed to fully set up a sequencer.  Instead, I’ll just show my finished code (you can look at the Oracle Sound Tutorial if you want).  The important point to remember is that the A-300 keyboard controller has three MIDI IN ports (A-PRO 1, A-PRO 2 and A-PRO MIDI IN), and that most, if not all of the MIDI messages will be coming from A-PRO 1 and A-PRO 2.  For the sake of argument though, we will record sequence data to 3 different Track objects, one for each A-300 port (if we only use A-PRO 1 and A-PRO 2, then we only need 2 Track objects). If we already have the K-PRO and A-300 MidiDevice instances from initMidiDevices(), then:

MidiDevice []   a300Device         = new MidiDevice[3];
Transmitter []  a300Xmtr           = new Transmitter[3];
int             aProMaxPorts       = 3;
Sequencer []    seqcr              = new Sequencer[3];
Receiver  []    seqRcvr            = new Receiver[3];
Sequence  []    seq                = new Sequence[3];
Track     []    track              = new Track[3];

private void setupA300() {
    try {
        for(int i=0; i<aProMaxPorts; i++) {
            if(a300Xmtr[i] != null) {
                seqcr[i] = MidiSystem.getSequencer();
                seqRcvr[i] = seqcr[i].getReceiver();
                a300Xmtr[i].setReceiver(seqRcvr[i]);
                seq[i] = new Sequence(Sequence.PPQ, 10);
                track[i] = seq[i].createTrack();
                seqcr[i].setSequence(seq[i]);
                seqcr[i].open();
                seqcr[i].recordEnable(track[i], -1);
            }
        }
        if(a300Xmtr[0] != null) {
            kBoard.seqTmr = 0;
            arp.keyPressed = false;
        }
    } catch (Exception ex) {
        JOptionPane.showMessageDialog(this, "Couldn't Open A-300 Pro Sequencer Objects\n" + ex, "Sequencer Open Error", JOptionPane.PLAIN_MESSAGE);
    }
}

For either the first two ports or all three, check whether the transmitter is null (i.e., the port isn't plugged in or turned on). If it isn't null, get a default software sequencer, get its receiver, and assign that sequencer receiver to the A-300 port's transmitter. Make a new sequence (the details don't matter because we're intercepting MIDI messages in real time), create one new track for it, and assign the sequence to the software sequencer. Open the sequencer and enable recording to that track for all 16 MIDI channels of the A-300 keyboard. (If we were making a true sequencer system, we'd assign 16 tracks to the sequence, one for each channel.)

In the last step, recording all 16 MIDI channels to one track, I’m saving myself work later on. I want to be able to parse MIDI messages from just one track and I don’t care what channel of the A-300 they came from because I’m using my instrument voice preset buttons for channel control.  If I did care, then I’d need an array of 16 Track objects and loop through them one at a time.

The remaining code just sets seqTmr to 0 and arp.keyPressed to false.  The purpose here is to include the sequencer in our master timer (to be explained next time) and to disable the arpeggiator, in case the user is holding a keyboard key down.

Roland Part 2


I decided to modify the K-Gater program in several ways as I began implementing the code for the Roland keyboard controller. First, I split up the code for opening the receiver to the K-Pro from the rest of the initialization so I could make it more modular. This allowed me to add checks for whether a device implements the Sequencer or Synthesizer classes. (Because none of the external hardware I'm using does either, I'm not doing anything special with this new code.) The purpose of getMidiDevice() then is to just receive the name of the desired port as a character string, and return an instance of the MidiDevice class if the port name is found (i.e., if the device is turned on and plugged into the computer). I use the testMode variable to determine how much data to print out to the jTextArea1 box (1 = print name, description, vendor, version; 2 = only print the name; 0 = don't print anything).

initMidiDevices() is where I make the calls to getMidiDevice() with the names of each of the desired ports. Note that while the A-300 Pro has three ports (A-PRO 1, A-PRO 2 and A-PRO MIDI IN), only the first two transmit controller data right now. So, I use the aProMaxPorts variable to determine how many ports to connect to. Only using the first two can help speed the program up a little bit later on. Otherwise, this section works pretty much like the original K-Gater code did when I just had the K-Pro.

New methods:

private MidiDevice getMidiDevice(String deviceName, int testMode)
private void initMidiDevices()

As a recap, once we have the K-Pro Midi Device instance, we want to get a receiver and then open it, as shown in the code snippet.

MidiDevice kProDevice = null;
Receiver kProRcvr = null;

kProDevice = getMidiDevice("KAOSSILATOR PRO 1 SOUND", 0);
if(kProDevice != null) {
    kProRcvr = kProDevice.getReceiver();
}

The difference with the A-300 is that we want to get a transmitter instead of a receiver.

MidiDevice [] a300Device = new MidiDevice[3];
Transmitter [] a300Xmtr = new Transmitter[3];
int aProMaxPorts = 2;
String [] aProPortNames = {"A-PRO 1", "A-PRO 2", "A-PRO MIDI IN"};

a300Device[i] = getMidiDevice(aProPortNames[i], 0);
if(a300Device[i] != null) {
    a300Xmtr[i] = a300Device[i].getTransmitter();
}

———————– Raw code.

MidiDevice kProDevice = null;
Receiver kProRcvr = null;

MidiDevice [] a300Device = new MidiDevice[3];
Transmitter [] a300Xmtr = new Transmitter[3];
int aProMaxPorts = 2;
String [] aProPortNames = {"A-PRO 1", "A-PRO 2", "A-PRO MIDI IN"};

private MidiDevice getMidiDevice(String deviceName, int testMode) {
    MidiDevice ret = null;
    int holdNo = -1;

    MidiDevice.Info[] infos = MidiSystem.getMidiDeviceInfo();
    for (int i = 0; i < infos.length; i++) {
        if(testMode == 1) jTextArea1.append(infos[i].getDescription() + "\n" + infos[i].getName() + "\n" + infos[i].getVendor() + "\n" + infos[i].getVersion() + "\n\n");
        if(testMode == 2) jTextArea1.append(infos[i].getName() + "\n");

        if(infos[i].getName().contains(deviceName)) {
            if(holdNo == -1) { // Only return first match
                holdNo = i;
                try {
                    ret = MidiSystem.getMidiDevice(infos[i]);
                } catch (MidiUnavailableException e) {
                    JOptionPane.showMessageDialog(this, "Couldn't Get Device:" + i, "Device Open Error", JOptionPane.PLAIN_MESSAGE);
                    ret = null;
                }
                if(debug) {
                    if (ret instanceof Synthesizer) {
                        JOptionPane.showMessageDialog(this, deviceName + " is a synthesizer!", "Whee!", JOptionPane.PLAIN_MESSAGE);
                    }
                    if (ret instanceof Sequencer) {
                        JOptionPane.showMessageDialog(this, deviceName + " is a sequencer!", "Whee!", JOptionPane.PLAIN_MESSAGE);
                    }
                }
                if (! (ret.isOpen())) {
                    try {
                        ret.open();
                    } catch (MidiUnavailableException e) {
                        JOptionPane.showMessageDialog(this, "Couldn't Open Device:" + ret.toString(), "Device Open Error", JOptionPane.PLAIN_MESSAGE);
                        ret = null;
                    }
                }
            }
        }
    }
    return(ret);
}

private void initMidiDevices() {
    kProDevice = getMidiDevice("KAOSSILATOR PRO 1 SOUND", 0); // Try to open K-Pro
    if(kProDevice != null) {
        try {
            kProRcvr = kProDevice.getReceiver();
        } catch (MidiUnavailableException e) {
            JOptionPane.showMessageDialog(this, "Couldn't Open Receiver:" + kProDevice.toString(), "Receiver Open Error", JOptionPane.PLAIN_MESSAGE);
        }
        initkProInstrumentList();
    } else {
        jComboBox1.setEnabled(false); // Deactivate K-Pro related buttons if no K-Pro
        for(int i=0; i<11; i++) {
            jBList.get(i).setEnabled(false);
        }
    }

    for(int i = 0; i < aProMaxPorts; i++) { // Try to open A-300 Pro ports
        a300Device[i] = getMidiDevice(aProPortNames[i], 0); // String: deviceName, testMode 1 = all midi name info; 2 = name only
        if(a300Device[i] != null) {
            try {
                a300Xmtr[i] = a300Device[i].getTransmitter();
            } catch (MidiUnavailableException e) {
                JOptionPane.showMessageDialog(this, "Couldn't Open Transmitter:" + a300Device[i].toString(), "Transmitter Open Error", JOptionPane.PLAIN_MESSAGE);
            }
        }
    }
}

 

A-Pro, Part 1


Ok, this entry is going to start out sketchy since I’m still in the WTF stage regarding communicating with the Roland A-300 Pro. As I get stuff to work, I’ll rewrite this entry as appropriate. If I’m lucky, I’ll be 100% functional by the time I have to publish this.


(Roland A-300 Pro)

I'd thought that opening up a transmitter to receive MIDI messages from a MIDI OUT device like a keyboard controller would be much like opening a receiver. I admit that I wasn't really paying attention to transmitters when I was reading the Java documentation on receivers, because if I had, I might never have gone out to buy the A-300. Yes, the situation is even worse than what I faced when I tried to get the code to work with the K-Pro.

If you look at the Oracle Sound Tutorial, the discussion shows how to directly connect a transmitter to a receiver, and how to fork the transmitter to connect to both a receiver and a sequencer object.  There’s nothing about connecting the transmitter to some kind of an object that lets you see the MIDI messages as they come in and interpret them in some way.  And that’s exactly what I designed the K-Gater around as a midi mapper.  I wanted to set up a transmitter object to the A-300, play the A-300 keys, have the messages plop somehow into an array or some midi object, and then set up a switch-case to parse the messages and have them run jButton .doClick() operations for selecting instruments, changing the gate arp rate and ratio, and changing K-Pro volume.

Instead, the Transmitter object has NO METHODS WHATSOEVER for presenting MIDI messages from a MIDI OUT device to your Java app.  It’s like a great big hole in the ground.  Data goes from the keyboard controller straight to the software synth for direct playback, which is fine if you want the default software synth for the voices for your keyboard controller.  Not so fine if you want to change instrument voices with the controller. Or, pretty much if you want to do anything else. Further, there’s a 2 second latency from when I press the keys to when the notes get played (when I was using Sonar LE with the A-300 Pro, with the Windows software synth instruments, it was a 1/2 second delay).  I think this latency is built into the transmitter-receiver link, and isn’t simply a function of making sounds through software, like Roland claims.

The Oracle sound tutorial also shows that example of “forking”, where one transmitter drives two or more receivers.  The specific example is to have some random transmitter talking directly to the default software synth and the default software sequencer at the same time.

The clue to intercepting MIDI messages is in this fork code.  The sequencer needs a start and stop instruction for recording MIDI messages as a sequence.  My thought is that the only option is to start the sequencer record and check when the size changes, read the recorded message and parse that.  The sequencer uses “ticks”, the number of time slices between NOTE_ON and NOTE_OFF messages to enable the sequence playback.  But, since I want real-time control of the K-Pro, I can ignore ticks and just try to monitor the midi messages as they come in.  It’s still tricky, because you have to do the following:

Create the Sequencer object
Create a Sequence object
Create a track object
Open the Sequencer
Connect the Sequencer receiver to the A-300 transmitter
Enable the track
Assign the track object to the sequence, and record all channels to the one track
Start the Sequencer recording
Using a timer object, prepare to read the tracks every 10 ms to 100 ms
Stop recording
Check the track size and if it’s > 1, parse it
Zero the track
Start recording again

The above is for just one A-300 MIDI IN port.  There are three ports, so you have to do the above three times, with one sequencer-sequence-track combination per port.

Yuck.

You may have noticed that I started out using the term MIDI OUT for the A-300, then suddenly switched to using MIDI IN just now. You would think that if you connect a cable to the MIDI OUT jack of the keyboard controller, that this would be a MIDI OUT connection. You would be wrong. The second the data arrives at the computer, Roland refers to it as a MIDI IN port in their USB driver software running on your PC.

The Roland adds more of its own complications:
3 MIDI IN ports instead of 1
MIDI ports change names depending on the USB port you’re connected to
Most of the controls are reassignable

1) The Roland A-300 has LOTS of controls.  So many that it needs 3 MIDI IN ports to communicate them all, even though they’re still only going through one USB cable. Fortunately, with the default assignments, messages appear on only 2 of the three ports.

2) For some reason, Roland seems to think that we'll be using more than 1 keyboard controller at a time. If you plug the A-300 Pro into one USB port, the MIDI IN names are assigned as A-PRO MIDI IN, A-PRO 1 and A-PRO 2. Plug the unit into a different USB port on the PC, and "2- " or "3- " gets prepended to the names. So, either you only use one specific USB port all the time, you strip out the prepended numbers, or you have to hard-code all 6 or 9 port name variants into your app.

2a) Roland doesn’t support USB splitter boxes. If you only have one USB port on your computer and it’s connected to your mouse, too bad.

3) You can program the MIDI number assignments of any control as you like.  It’s good from a flexibility viewpoint, but this makes parsing MIDI messages in your app much more difficult.  The A-300 Pro uses controller mapping files for specific applications, but I haven’t gotten far enough to figure out how to use those files for my own app.  In the meantime, I have to be careful to not screw things up when using the map for Sonar LE.

4) This one isn’t necessarily a complication, assuming that I can master controller maps.  But, the touch pads at the top right of the controller are initially mapped to the percussion instruments in MIDI channel 9.  If I want to use the pads for changing K-Pro instrument voices, I either need to turn them into control pads somehow, or I mask them out of the NOTE_ON/NOTE_OFF code (or both).

————————-

On a slightly different subject, I had been completely confused regarding the relationship between channels and ports.

channel: A MIDI channel is one of 16 communications “packets” that allows a controller to direct messages to a specific musical instrument.  These instruments can be external hardware like the K-Pro, or software simulators like the default software synthesizer that I’ve been using.  You identify which channel you want to control, put that channel number in the header of the “packet” and send the packet out (i.e. – via the hardware cable for an external device). Whatever device is set to that channel will then be the one that intercepts that packet and plays whatever notes you told it to.

port: A "port" is a connection that you talk to for communicating with part of the external instrument, usually through the software driver for that instrument. The port is independent of the MIDI channel, and is, in fact, what you use to send MIDI data to the external hardware. As an example, the A-300 has 5 ports: 3 for MIDI IN ("A-PRO 1", "A-PRO 2" and "A-PRO MIDI IN") and 2 for MIDI OUT ("A-PRO" and "A-PRO MIDI OUT"). Meanwhile, the K-Pro has two ports: "KAOSSILATOR PRO SOUND" and "KAOSSILATOR PRO PAD". (I'm assuming that "PAD" is MIDI IN when using the K-PRO as a controller. "SOUND" is used for controlling the K-PRO.)

The bottom line is that when you’re deciding which MIDI channel numbers to assign your instruments to, the number of ports the equipment has is irrelevant.  The K-PRO accepts one channel assignment, 1-16.  The software synthesizer has all 16 channels.  The A-300 Pro is the main controller for this system and doesn’t have MIDI channels assigned to it; instead, it sends MIDI message data via one of the 3 ports to specific channels of whatever external hardware or software devices you have plugged in and running.

 

K-Gater, Part 12


Ok, so that’s the K-Gater app. Time for a little recap.

The entire point of this exercise was to create a kind of midi mapper to drive the Korg Kaossilator Pro, which doesn't support standard NOTE_ON/NOTE_OFF MIDI messages, and I couldn't find example code for talking to external hardware. All the tutorials and documentation start out discussing both external hardware and the default software synthesizer, but right at the most crucial point, they switch over to talking about the software synth exclusively. I had to do a lot of trial and error to make that leap to a working external hardware connection.

Both start out the same way, by using the MidiSystem object. For external hardware, we need to know what’s plugged in and running, which will include the Windows synth support files if you’re on a Windows PC. We do this by getting an array of info objects. Note that it doesn’t matter to us if the devices are software-only, connected through MIDI cables or through the USB port.

MidiDevice.Info[] infos = MidiSystem.getMidiDeviceInfo();

MidiDevice just lets us display the vendor name, device name, description and version. If we want the instrument names or channel numbers, we need to open a Synthesizer object, if possible.

Next, loop through the info array and check if the vendor name or description matches the device we want. For the K-Pro, the name is “Kaossilator Pro 1 Sound”, where the “1” is the port number, if we enabled the “include port number in driver name” option. For the Roland A-300 Pro midi keyboard controller, there are actually 5 ports, for MIDI IN: “A-PRO 1”, “A-PRO 2” and “A-PRO MIDI IN”; and for MIDI OUT: “A-Pro” and “A-PRO MIDI OUT”. So, we need to check for all three MIDI IN ports in order to connect transmitters to them. (The following code fragment is for the K-Pro only.)

int kProDeviceNo = -1;
for (int i = 0; i < infos.length; i++) {
    if(debug) {
        jTextArea1.append(infos[i].getDescription() + "\n" + infos[i].getName() + "\n" + infos[i].getVendor() + "\n" + infos[i].getVersion() + "\n\n");
    }
    if(infos[i].getName().startsWith("KAOSSILATOR PRO")) {
        kProDeviceNo = i;
    }
}

If kProDeviceNo is -1, it hadn’t been plugged in or turned on at the time the app started. Since MidiSystem doesn’t seem to be refreshing properly on my computer, the only choice is to exit the app and try running it again in a few seconds, after making sure the K-Pro is on and connected. Otherwise, the next step is to create a MidiDevice object with the K-Pro info.

MidiDevice kProDevice = null;
if(kProDeviceNo > -1) {
    try {
        kProDevice = MidiSystem.getMidiDevice(infos[kProDeviceNo]);
    } catch (MidiUnavailableException e) {
        JOptionPane.showMessageDialog(this, "Couldn't Get Device:" + kProDeviceNo, "Device Open Error", JOptionPane.PLAIN_MESSAGE);
    }
    if (!(kProDevice.isOpen())) {
        try {
            kProDevice.open();
        } catch (MidiUnavailableException e) {
            JOptionPane.showMessageDialog(this, "Couldn't Open Device:" + kProDeviceNo, "Device Open Error", JOptionPane.PLAIN_MESSAGE);
        }
    }
    initkProInstrumentList();
} else {
    jComboBox1.setEnabled(false); // Deactivate K-Pro related buttons if no K-Pro
    for(int i=0; i<11; i++) {
        jBList.get(i).setEnabled(false);
    }
}

Lots of exceptions being thrown here. Netbeans forces us to handle all of them, or it won’t run right. I think that some of the Java documentation assumes that the user is hand-coding Java and that the exceptions can be thrown without being caught. This is another issue that I have with the official documentation, which doesn’t always include exception handling.

Once we do the .getMidiDevice(info), we need to .open() the device. When we exit the app, we need to make sure we .close() it, or the link to the K-Pro will remain open after the app stops running. Same holds true for the A-300 Pro support.
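As a sketch of that cleanup (assuming the kProDevice and a300Device objects opened earlier, and whatever exit handler the app uses to call it):

private void closeMidiDevices() {
    if (kProDevice != null && kProDevice.isOpen()) {
        kProDevice.close();
    }
    for (MidiDevice d : a300Device) {   // close each A-300 port that was opened
        if (d != null && d.isOpen()) {
            d.close();
        }
    }
}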

To determine if the external device is a synthesizer or sequencer, we can query it with instanceof. We can do this before doing .open():

if (kProDevice instanceof Synthesizer) {
    JOptionPane.showMessageDialog(this, "K-Pro is a synthesizer!", "Whee!", JOptionPane.PLAIN_MESSAGE);
}

The K-Pro doesn't support the Synthesizer class, so I can't get the soundbank from it. If it did, I could make a Synthesizer object and use something like .getSoundBank(). Instead, I hand-typed the instrument names into an array, and I use my initkProInstrumentList() method to copy the names from the array to a combo box to display to the user. The remaining code is just used to disable my combo box and K-Pro voice preset buttons if kProDevice is null.

The next part is what all of the official documentation skips. To actually talk to the external hardware, we need to open a receiver and/or a transmitter to it. If we’re sending MIDI messages to the device, we need a receiver. Which of course can throw another exception we have to catch. Note that I initially opened the receiver every time I wanted to send data to the K-Pro because that’s the way it’s shown in one of the examples. It’s inefficient, but the idea is that if the hardware can only support a limited number of receivers, you’ll allow another app to talk to the K-Pro in sequence this way. In fact, the number of receivers the K-Pro supports is unlimited, so I’m just going to open one receiver at the beginning and leave it open in the next version.

private void kProProgram(int no) { // Change K-Pro instrument
    int bank = 0;
    int nbank = 0;
    int ch = 0;
    ShortMessage myMsg = new ShortMessage();
    Receiver rcvr = null;
    long timeStamp = -1;
    if(kProDevice != null) {
        try {
            rcvr = kProDevice.getReceiver();
        } catch (MidiUnavailableException e) {
        }
        try {
            bank = (no < 127) ? 0 : 1;
            nbank = (bank == 1) ? 0 : 1;
            ch = (no < 127) ? no : no - 128;
            myMsg.setMessage(ShortMessage.CONTROL_CHANGE, midiPorts.kProChannelNo, 0, nbank);
            rcvr.send(myMsg, timeStamp);
            myMsg.setMessage(ShortMessage.CONTROL_CHANGE, midiPorts.kProChannelNo, 32, bank);
            rcvr.send(myMsg, timeStamp);
            myMsg.setMessage(ShortMessage.PROGRAM_CHANGE, midiPorts.kProChannelNo, ch, 0);
            rcvr.send(myMsg, timeStamp);
        } catch (javax.sound.midi.InvalidMidiDataException e) {
        }
    }
}

private void kProNote(int ch, int col, int row, int onOff) { // Change K-Pro note (x-y pad)
    ShortMessage myMsg = new ShortMessage();
    Receiver rcvr = null;
    long timeStamp = -1;
    if(kProDevice != null) {
        try {
            rcvr = kProDevice.getReceiver();
        } catch (MidiUnavailableException e) {
        }
        try {
            myMsg.setMessage(ShortMessage.CONTROL_CHANGE, midiPorts.kProChannelNo, 12, row);
            rcvr.send(myMsg, timeStamp);
            myMsg.setMessage(ShortMessage.CONTROL_CHANGE, midiPorts.kProChannelNo, 13, col);
            rcvr.send(myMsg, timeStamp);
            myMsg.setMessage(ShortMessage.CONTROL_CHANGE, midiPorts.kProChannelNo, 92, onOff);
            rcvr.send(myMsg, timeStamp);
        } catch (javax.sound.midi.InvalidMidiDataException e) {
        }
    }
}

Talking to the external device is just a matter of building up a ShortMessage and using receiver.send(). The type of message you send depends in large part on whether you're talking to the software synthesizer or to external hardware. For the software synth, messages follow the MIDI spec, with ShortMessage.NOTE_ON, ShortMessage.NOTE_OFF and ShortMessage.PROGRAM_CHANGE being the primary ones. For the Kaossilator Pro, most messages are ShortMessage.CONTROL_CHANGE messages. For more typical external instruments, the messages will be the same as for the software synth. It's important to remember here that the K-Pro doesn't implement the Synthesizer class and therefore we have to send the messages the hard way. In the section below, we get to see what happens if a device DOES implement the Synthesizer class.

And, that’s it for external devices.

——————————————

In contrast, let's look at the one part that all of the documentation agrees on: using the built-in software synth. This starts out the same way, with the MidiSystem class. MidiSystem.getSynthesizer() returns the default synth. That's it, it's just that easy. You can then get the default soundbank, or load one that you want from a file or from a URL. The soundbank contains bank numbers and voice numbers, which need to be loaded into an array for later reference. And, because the software synth can be assigned to all 16 channels, we need an array for the channel-to-voice relationship.

Soundbank soundbank = null;
Synthesizer synth = null;
MidiChannel [] channels = null;
Instrument[] aInstruments = null;

private void readSoundBank() { // Load the default Java software synth
    try {
        synth = MidiSystem.getSynthesizer();
        synth.open();
        soundbank = synth.getDefaultSoundbank();
        synth.loadAllInstruments(soundbank);
        channels = synth.getChannels();
    } catch (Exception ex) {
        JOptionPane.showMessageDialog(this, "Couldn't Open Soundbank:\n" + ex, "Soundbank Open Error", JOptionPane.PLAIN_MESSAGE);
    }
    if (soundbank == null){
        jTextArea1.setText("no soundbank");
    } else {
        aInstruments = soundbank.getInstruments();
        for (int i = 0; i < aInstruments.length; i++) {
            jComboBox3.addItem(aInstruments[i].getName());
        }
    }
}

Changing instruments takes place with the MidiChannel object .programChange() method:

channels[channelNo].programChange(aInstruments[voiceNum].getPatch().getBank(), aInstruments[voiceNum].getPatch().getProgram());

Turning notes on and off is just a matter of calling the appropriate Channel method. We just need to give the note number (0-127) and the keyboard velocity (i.e. – volume; 0-127) for note on.

channels[channelNo].noteOn(arp.currentNote, keyboardVolume);
channels[channelNo].noteOff(arp.currentNote);

The main point regarding this comparison is that if the external hardware implements the Synthesizer class, we can use these Synthesizer methods on it just as we do for the default synth. The difference being that we’d open a receiver to the hardware rather than using MidiSystem.getSynthesizer(). (I would give working example code for using an external hardware synthesizer, but I don’t own one.)

And that’s it for the Kaossilator Pro support.