

Developers: MIDI key illuminator


written by: NothanUmber

Ok, here is a Windows version of the MIDI Illuminator agent for EigenD 1.4.12. Please have a look at the included Readme.
If it doesn't work just come back to me and I can have a look.
(As far as I can remember there were problems with very big setups - like the Alpha3 setup - because the way layouts are handled is more on the memory-intensive "brute force" side. Here it might help to experiment with the "light" versions (also included, but not fully tested if I remember right), which use less memory but also offer fewer features.)

I also added the complete sources in the state I left them. (Not the prettiest possible - this was mainly for learning how things work. I wanted to clean it up but never did... but it should work.)
P.S.: I forgot to add the Stage tabs. If the folder should be empty, please redownload.

written by: NothanUmber

Sun, 9 Jun 2013 22:53:50 +0100 BST


I was thinking about building an agent that takes MIDI as an input and illuminates the keys on the keyboard according to the selected layout and scales. Different voices should be visualized in different colors. This should be a helper app for learning to play pieces on the Eigenharp for which you have MIDI scores.

Now theoretically what one could do is to build one monolithic "midi illuminator" agent that takes midi data and outputs (key number, led color) infos that can directly be fed into the kgroup.
That agent would be a "one trick pony" though.
So perhaps it would be better to decompose the task into smaller agents that can be reassembled to perform other tasks. E.g. one could build
* a midi->note stream agent which adds a voice information to the note metadata which can be used later on
* some kind of "inverse scaler" that takes a note pitch and outputs the according key number
* and the (now smaller) key illuminator that uses the key and voice number information to light a LED on the keyboard
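The proposed decomposition could be sketched as three plain functions. This is purely illustrative - the function names and data shapes are made up for this sketch and are not EigenD APIs:

```python
def midi_to_notes(midi_events):
    """midi->note stream: tag each note-on with a voice id (here: the MIDI channel)."""
    return [{"pitch": p, "voice": ch} for (ch, p, vel) in midi_events if vel > 0]

def inverse_scale(pitch, key_to_pitch):
    """'Inverse scaler': map a note pitch back to the key(s) that produce it."""
    return [key for key, p in key_to_pitch.items() if p == pitch]

def illuminate(notes, key_to_pitch, voice_colors):
    """Key illuminator: produce (key number, led color) pairs for the kgroup."""
    lit = []
    for note in notes:
        for key in inverse_scale(note["pitch"], key_to_pitch):
            lit.append((key, voice_colors.get(note["voice"], "green")))
    return lit
```

Composed together these form the monolithic "midi illuminator"; kept separate, each piece can be reused on its own.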

As far as I remember, in one of the seminars Dave (or was it John?) spoke about a new upcoming concept that introduces key position as something that can be transferred between agents alongside the pitch information. Is this already realized? Could it be used for what I need, or would I have to introduce key numbers (and possibly also voices?) as custom metadata myself, created and understood (at the moment) only by my agents?

Here is a diagram that gives a rough overview of what I mean.


Edit: Link to least outdated version:
MIDI Illuminator for EigenD 1.4.12 (Mac)
MIDI Illuminator agent for EigenD 1.4.12 (Win)
This is still for EigenD 1 - an EigenD 2 version is planned to be done by the end of the year.

written by: NothanUmber

Sat, 15 Oct 2011 21:44:30 +0100 BST

Think I found the key position support - it was added to the v2 branch some time ago (027d86718d437830ee03e04611e75d3a0c95c6f3). Pretty big changes; porting that back to v1.4 won't make sense, I fear... I will at least start to have a deeper look - the future is presumably v2 nonetheless...

written by: geert

Sun, 16 Oct 2011 15:08:24 +0100 BST

Hi NothanUmber,

The new keystream in v2 is one of the reasons why it will not be backwards compatible, so a port to 1.4 is not a good idea. Also, it's very far reaching and has had some more improvements since that changeset; it'll be very difficult to just extract the diffs relevant to that.

However, you don't need the key positioning to do what you want to achieve. You can just write a lights agent that talks directly to the keyboard agent. If you look at plg_keyboard/, there's a 'light input' that takes a statusbuffer which contains a bitmap of the key status information of the entire keyboard. Each key's status is then converted into a key color.

If you write an agent that has inputs to receive data from a MIDI input agent and outputs a status buffer based on that, you should be able to hook it up directly to a keyboard.

The difficulty will be to have this interact well with other agents that also send statusbuffers to the keyboard. You'll have to put a statusmixer_t in between to blend the signals together. Another option is an atom that allows you to switch between different statusbuffer datastreams; we have the selector_t that does this.
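To make the statusbuffer idea concrete, here is a toy model of a key-status bitmap with 2 bits per key and a crude blend of two buffers. The 2-bits-per-key packing and the "first buffer wins" mixing rule are assumptions for illustration - the real statusbuffer_t and statusmixer_t in EigenD may work differently:

```python
# Toy status buffer: 2 bits per key (0=off, 1=amber, 2=red, 3=green),
# 4 keys per byte. Layout is assumed, not the real statusbuffer_t format.

def set_key(buf, key, status):
    """Pack a 2-bit status for 'key' into the bytearray."""
    byte, slot = divmod(key, 4)
    buf[byte] = (buf[byte] & ~(0b11 << (slot * 2))) | (status << (slot * 2))

def get_key(buf, key):
    byte, slot = divmod(key, 4)
    return (buf[byte] >> (slot * 2)) & 0b11

def mix(a, b, nkeys):
    """Crude stand-in for a statusmixer_t: a key lit in either buffer stays
    lit; buffer 'a' wins when both set a color for the same key."""
    out = bytearray(len(a))
    for k in range(nkeys):
        set_key(out, k, get_key(a, k) or get_key(b, k))
    return out
```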

Hope this gives you some pointers to get started as it's a great idea!

Take care,


written by: NothanUmber

Sun, 16 Oct 2011 16:15:20 +0100 BST

Hi Geert,

thanks for your comments! The key position thing was just an idea to decouple the "inverse scaler" (see diagram linked above) from the "key illuminator", as I could imagine the iscaler also being helpful in other situations where you want to do something with the key to which a given pitch is currently bound - e.g. for graphical keyboard simulations(*), or iscaler+key illuminator could be used to place helper lights on the keyboard for arbitrary layouts/scales etc.

But presumably you are right: starting with the "monolithic" midi illuminator would allow me to work with 1.4 for the time being, and it could later be refactored into smaller agents when the v2 internals are more settled.
Sounds like a good plan!

(*) Pico/Tau/Alpha etc. simulators would probably be a good idea in any case to give developers the opportunity to test setups without having to own every Eigenharp model in existence.


written by: NothanUmber

Mon, 17 Oct 2011 10:26:32 +0100 BST

Is there a convenient way to get my hands on the key to pitch mapping?
As far as I understood, the concept currently works like this:
keyboard: send raw key numbers
kgroup: apply base note + octave transposition
scaler: apply scale
As this stream seems to be unidirectional in the keyboard->scaler->instrument direction, I don't see a place where I could extract a final pitch to key mapping. (With v2 I could build one by forcing the player to play all keys at least once and building a pitch to key map with an agent behind the scaler.)
Do you see an easy way to extract this mapping so players could continue to use the courses/scales concept when using the midi practice lights agent, or would I have to short-circuit the scaler/transposer concept and build a (presumably simpler) implementation on my own (e.g. a text file that contains a keyboard with midi note numbers for each key, an initial transpose offset, transpose up/down inputs etc.)?
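The forward keyboard -> kgroup -> scaler chain described above can be modeled roughly as follows; the scale handling is simplified and all names are illustrative, but inverting exactly this kind of chain (e.g. by "playing" every key once) is what the inverse mapping would have to do:

```python
# Simplified model of the key->pitch chain: kgroup applies base note and
# octave transposition, scaler maps key positions into a scale.

MAJOR = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of a major scale

def key_to_pitch(key, base_note=60, octave=0, scale=MAJOR):
    octaves, degree = divmod(key, len(scale))
    return base_note + 12 * (octave + octaves) + scale[degree]

def build_inverse(num_keys, **kw):
    """Brute-force pitch->key map, as proposed: run every key through the
    forward chain once and invert the result."""
    return {key_to_pitch(k, **kw): k for k in range(num_keys)}
```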


written by: NothanUmber

Tue, 18 Oct 2011 19:29:00 +0100 BST

Had a more in-depth look at some agent implementations yesterday and have a slightly better understanding of the internals now - though I still have some open questions. From what I saw there are four alternatives for achieving the pitch to key mapping functionality in the midiilluminator agent:

1) The midiilluminator agent ignores scales, transpose values and base note from the scaler and courses from the kgroup and just uses its own mapping (e.g. from a text file that contains a key number for each note pitch). Then you'd have to change this mapping whenever scale, courses or transposition etc. are changed - easiest to implement but least flexible.

2) Extend the kgroup and scaler agents by introducing additional inputs/outputs and methods. Those could then directly access the state information inside the scaler and kgroup in order to calculate the key which should be lit. Advantage: every scale, course and transpose setting is supported. Disadvantage: two essential agents (scaler+kgroup) would become more complex for a feature that not everybody might need.

3) Add scale, course, transpose etc. inputs and set/add etc. verbs to the midiilluminator. Then each agent that connects its output to an input of the kgroup or scaler would also have to connect to the input of the midiilluminator so that it also gets the reconfiguration info. (The same would apply to talkers: they would also have to send commands directed at the scaler/kgroup to the midiilluminator.) Assuming the initial configuration is the same, this would keep the mapping in sync without having to read the state of the scaler+kgroup. Question: is it possible to build groups of agents that act as one agent to the outside? (So if e.g. somebody connects to a kgroup/scaler input, the information is automatically also sent to the midiilluminator. This would presumably mean that one agent would have to be able to be part of several groups - the same midiilluminator instance would be in the kgroup+m.i. and scaler+m.i. groups.)

4) Let the midiilluminator agent introspect the atoms of the scaler and kgroup to retrieve the necessary scale/course etc. information which is not explicitly exposed via outputs. (That implies that all necessary state info is already accessible as atoms, which is presumably the case - otherwise that would have to be changed in the scaler and kgroup first.) Information I'd need for that: is it possible to connect to a "root" atom, so I can essentially read the whole state tree of a foreign agent instead of just one leaf atom? Then all one would have to do as a Belcanto user would be to connect the involved scaler and kgroup to the midiilluminator to feed all their state data into the latter. If this works it would presumably be the preferable solution.
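Alternative 1 is simple enough to sketch directly. Here is a hypothetical text-file format - one "note key" pair per line with optional comments - loaded into a pitch->key dict; the format and function name are made up for illustration:

```python
# Sketch of alternative 1: a static note->key mapping loaded from a plain
# text file. Each non-empty line holds "<midi note> <key number>";
# '#' starts a comment. A transpose offset shifts the whole mapping.

def load_mapping(lines, transpose=0):
    mapping = {}
    for line in lines:
        line = line.split("#")[0].strip()  # strip comments and whitespace
        if not line:
            continue
        note, key = map(int, line.split())
        mapping[note + transpose] = key
    return mapping
```

As noted above, the price of this approach is that the file has to be edited by hand whenever scale, courses or transposition change.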


written by: barnone

Tue, 18 Oct 2011 20:12:39 +0100 BST

1 would be a good start. I'd really like to see an OSC agent that takes messages to light the keys in general as a first step.

Using the pattern from the OSC output agent

/keyboard_1/led/7 [0..3]
0 = led off
1 = led amber
2 = led red
3 = led green
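A minimal parser for that address pattern could look like this; the /keyboard_1/led/&lt;key&gt; scheme is the suggestion above, while the parsing code itself is just a sketch (a real agent would receive these via an OSC library):

```python
# Parse an OSC-style LED message of the form /<device>/led/<key> with an
# integer argument 0..3, returning (device, key number, color name).

LED_COLORS = {0: "off", 1: "amber", 2: "red", 3: "green"}

def parse_led_message(address, value):
    parts = address.strip("/").split("/")
    if len(parts) != 3 or parts[1] != "led":
        raise ValueError("not a led message: " + address)
    if value not in LED_COLORS:
        raise ValueError("led value out of range: %r" % value)
    return parts[0], int(parts[2]), LED_COLORS[value]
```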

I'd be willing to help with that.

7up actually does this key lighting by default: when playing back recorded patterns, when receiving midi input, and even when transposing on the fly. Some of it might be adaptable, or at least some ideas could be taken from it.

What's really cool is that you can play and record a pattern in, say, a major scale, then change to a different scale, and the recording will remap to it. I think this is also what you are after, and it's extremely valuable for learning. I highly approve.

[edit] actually, you already have all that code, given that EigenTab is very capable in this area. Would love to connect EigenTab to 7up as well. Seems it would not be hard.

written by: geert

Tue, 18 Oct 2011 19:57:44 +0100 BST

@NothanUmber sorry for the slight delay in replying. It's not possible in EigenD to do the key to scale mapping in the reverse fashion without major work. Remember that key groups are hierarchical and that they contain courses and course offsets, which are then used for the scales. It's best to wait for the scale system to be extended with semantic meaning per position and reverse key support. This goes together with the physical and musical key support that was already added to EigenD 2.0 as key streams.

In the meantime I think that the simplest approach is the best and will still allow a lot of interesting applications to be developed. So I suggest you look at possibility 1 in your list of ideas.

written by: NothanUmber

Tue, 18 Oct 2011 21:06:53 +0100 BST

Hi Geert and barnone,

thanks for your comments! :)

@Geert: No problem regarding the "delay", I really appreciate every answer I get from the highly busy Eigenlabs dev team but don't "expect" any - especially not on people's free days ;)
@barnone: If I understood your description right, then this "7up" agent would actually already do all I wanted! I couldn't find it at first sight in the 1.4 sources though - can you please give me a hint in the right direction?
Adding raw OSC->LED connectivity also sounds quite interesting; if the "midiilluminator" already exists in the form of "7up" then I'd gladly try to join here! (I also have Bidule and Max, so we can provide basic wrapping patches/schematics.)

[edit] A google search later, I think I understood that I misunderstood you :)
With "7up" you presumably meant "SevenUp Live 2.0", which was meant more as an example than a ready-to-use EigenD agent? (I have neither Live nor a monome, so I didn't know about this program :) )
If that is the case I'd propose that I continue implementing variant 1 as suggested by Geert and you (I already started with the basic structure for that yesterday, as I saw no easy solution for the other implementation alternatives). Then it should be viable to add OSC server functionality later on that directly accesses the LED-control part as an alternative to MIDI, or we could build a second agent that reuses relevant parts of the midiilluminator. (Presumably you would like to do eventual mapping directly inside e.g. Max instead of using the simple configfile-driven mapper in the initial midiilluminator?)

written by: NothanUmber

Sun, 23 Oct 2011 16:12:48 +0100 BST

Found some time to continue with the midi illuminator once again :)

Things are coming together nicely; the basic framework and layout management etc. are already working. (I do not use a config file now but keep all layouts in the atom tree structure, so via Stage/Belcanto you can configure layouts, set colors directly for absolute keys or mapped notes (wherever they are for a given layout, for marker lights), define the priority of midi-lit keys, direct keys and direct notes, and enable/disable individual colors (so they can be used by another agent) :) )

Originally I wanted to highlight the incoming midi in different colors depending on channel, so by sending left hand notes on another channel than right hand notes you could distinguish them visually. One snag in that direction (code from the midi input):

void decoder_noteon(unsigned channel, unsigned number, unsigned velocity)
{
    //pic::logmsg() << "note on " << channel << "," << number << "," << velocity;
    unsigned long long t = piw::tsd_time();
    ...
}

void decoder_noteoff(unsigned channel, unsigned number, unsigned velocity)
{
    //pic::logmsg() << "note off " << channel << "," << number << "," << velocity;
    ...
}

So the channel information is lost in the encoding process :/
Any plans to change that? (I don't want to change the encoding myself for the moment, because that would have an impact on all midi-receiving agents...)


written by: NothanUmber

Sun, 23 Oct 2011 16:22:29 +0100 BST

Btw.: currently the "c" part of the blob_t key info structure is always set to 0 when sending key info. Is any agent relying on that behaviour?
If not, could I *hack* the midi input so that I send the channel info in the "c" variable? (For CC messages blob_t::k is always 0, so that shouldn't interfere.) That could probably work...
I will implement the midi illuminator so that it interprets blob_t::c as the channel for the time being, with 0 being channel 1 - so it also works with the original midi input agent. If you add channel info to the blob later on, that's only one line that would have to be changed on my side :)
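The proposed convention can be modeled in a few lines. blob_t is a C++ struct in midi_input.cpp; this Python sketch only illustrates the c-field reuse and the "0 means channel 1" backwards-compatibility rule:

```python
# Model of the proposed hack: reuse the otherwise-unused 'c' field of the
# blob for the MIDI channel, with c == 0 meaning channel 1 so existing
# midi input agents (which always send c = 0) remain compatible.

from collections import namedtuple

Blob = namedtuple("Blob", "k c")  # k = key/note number, c = repurposed field

def encode(note, channel):
    """Channel given as 1..16, stored zero-based in c."""
    return Blob(k=note, c=channel - 1)

def decode_channel(blob):
    """Interpret c as channel; legacy c == 0 maps to channel 1."""
    return blob.c + 1
```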

written by: barnone

Sun, 23 Oct 2011 16:40:19 +0100 BST

Looks promising....keep it up!

written by: NothanUmber

Tue, 25 Oct 2011 00:31:52 +0100 BST

Not unexpected, but nonetheless rather high - the new agents eat RAM for lunch... about 60 MB per instance for the alpha variant and still 12 MB for the pico one. Holding 20+ layouts with ~3000 atoms total (alpha) plus additional maps in the C++ code does not come cheap, though having everything accessible is handy. I will presumably stay with this approach for the moment; at a later, not-so-experimental stage one might think about distinguishing between offline layouts, which have to be reloaded from a file before they can be used, and live layouts, which can be manipulated via Belcanto (even while not currently active). In the extreme case you only need one live layout: the one that is currently active.

written by: barnone

Tue, 25 Oct 2011 01:06:23 +0100 BST

We've had house guests the last few days. I'm dying to get some time to dig in.

really all I'm doing right now is cheerleading. ;)

written by: NothanUmber

Tue, 25 Oct 2011 18:33:51 +0100 BST

Hi barnone,

as far as I can remember you mentioned that you are interested in helping out with OSC support. If you find time, just say so; the OSC fans would be thankful, I guess :)
I just had some first thoughts in that direction. As almost all vital information is exposed via atoms now, perhaps it would make sense to have a deeper look at the brpc utility and how it collects a list of all available agents and retrieves the current values of all their atoms (dump). If we had a comparable OSC input agent, it could be used to expose a big part of EigenD's configuration options via OSC by adding just one (presumably not "really" trivial) introspecting and delegating agent.
This would work for simple data types - essentially all the things you usually set via Belcanto/talkers - so we wouldn't even need "auto-plumbing"; as changing these config atoms usually does not need much bandwidth, the interpreter could be used.
For writing actual data streams we'd presumably still need specialized OSC agents - finding out what data format is expected inside blobs or vectors is sometimes difficult even for humans :)
But at least for the midi illuminator that wouldn't even be necessary, I think.

Here are the currently planned configuration options exposed via atoms for the alpha/tau/pico midi light controller. All of these should be OSC-automatable with the "generic" OSC input approach (I think?).
(I renamed the agents to only use existing Belcanto words, to avoid frequent merge conflicts in the Belcanto language file every time Eigenlabs invents a new word exactly at the spot where I placed "illuminator" ;) ):

* current layout [1-20, default: 1]:
number of layout that is currently used for note to key mapping
* enable: [True/False, default: True]: 
if set to False all illuminator controlled lights are disabled (e.g. to use menus etc.)
* green/orange/red lights 
(status flags that can be used to disable a feature for a certain color - e.g. to concentrate on one hand via midi, disable certain static helper lights to reduce confusion etc.; one group exists for each color)

- enable direct keys [True/False, default: True]:
show green/orange/red lights for keys as mapped in the "keys" group

- enable direct notes [True/False, default: True]:
show green/orange/red lights for all keys that are mapped to the notes in the "notes" group

- enable midi [True/False, default: True]:
show green/orange/red for all keys that are mapped to the notes that are currently played via midi
* key to note maps 
(group of groups of keys [note number, default: chromatic vertical 4xn layout (the default Eigenharp layout)]. These layouts determine the key->note (or note->keys) mapping that is used to determine which keys are lit by incoming midi notes or notes set in the "notes" group)

- current layout
(this layout is always a copy of the layout indicated by the "current layout" atom - this is helpful so you can maintain all layouts in one Stage tab instead of needing a tab per layout)

- key 1
- ...
- key 120(alpha)/72(tau)/18(pico)
- layout 1
- ...
- layout n
* keys [disabled/green/orange/red, default: disabled]
(group of "direct keys" which can be used to statically assign colors to keys independent of note pitch)

- key 1
- ...
- key 120(alpha)/72(tau)/18(pico)
* midi channels
(defines which colors should be used to visualize which channels [disabled/green/orange/red, default:disabled])

- color for channel 1
- ...
- color for channel 16
* notes [disabled/green/orange/red, default: disabled]
(list of all midi notes. This can be used to statically visualize certain notes - e.g. to visualize tonics/IV/V etc.)

- 1: c0
- ...
- 128: g10
* priorities
(in case midi/direct keys/direct notes all want to light up the same key, you need a priority [1-4; default: midi > direct notes > direct keys]. Prio 1 wins):

- direct keys prio
- direct notes prio
- midi prio
* transpose [-120;120; default: 0]
number of semitones that the layout should be transposed (has an impact on the visualization of midi notes and direct notes but not on direct keys)
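The priority resolution described above can be sketched in a few lines; the source names and the candidate format are illustrative, not the actual atom layout:

```python
# Resolve the color for one key when several sources (midi, direct notes,
# direct keys) claim it: the source with the lowest priority number wins.

def resolve_color(candidates, priorities):
    """candidates: {source: color}; priorities: {source: 1..4}, lower wins."""
    if not candidates:
        return "disabled"
    best = min(candidates, key=lambda src: priorities[src])
    return candidates[best]
```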


written by: barnone

Wed, 26 Oct 2011 04:00:08 +0100 BST

I'm totally with you!

The good news is that I'll be looking into all this shortly. The bad news is that I'm away on business for a small bit.

The other good news is that I now have a Max patch that interfaces with the OSC agent and provides direct high-res OSC-to-CV control of a modular. And it's not just monophonic - you can use it monophonically or polyphonically and get all the expression of the Eigenharp per note.

It just takes 5 CV outputs per voice of polyphony to capture all outputs (pitch, gate, pressure, yaw, roll). So I will definitely use this mostly monophonically, sometimes duophonically and more rarely polyphonically (as in 3 notes). Yeah, 3 notes takes 15 audio channels. Although other OSC-to-CV options exist as well.

This was a dream of mine getting the eigenharp and connecting it to analog gear and it's now realized.

Open Source can lead to good things. Thx eigenlabs.

Video is burning as we speak.

written by: barnone

Wed, 26 Oct 2011 04:56:25 +0100 BST


Higher Video

written by: geert

Wed, 26 Oct 2011 10:20:48 +0100 BST

Hi NothanUmber,

Catching up here: the blob_t structure in midi_input.cpp is solely used there. It's a hand-over data structure to enqueue data on the fast queue. The only method that will consume it is thing_dequeue_fast in the keyboard_t class in midi_input.cpp. I think you can add an additional member variable without any danger.

I've been looking at the memory usage of the agents recently; it's very difficult to reduce, but it's an ongoing effort right now. I hope to make good progress with it soon.

I'll take a look at your proposed atom structure later.

Great work! Really happy to see this all taking shape :-)

Take care,


written by: NothanUmber

Wed, 26 Oct 2011 22:44:38 +0100 BST

Ok, most things should be implemented now. I could test the Stage side, which works (ok, for that only the Python part has to work). Not sure how far I am from "done" though, as I am currently struggling to hook the thing up to the keyboard agent.
As far as I understood, there are two variants for plumbing two agents together:
a) some kind of auto-plumbing, where inputs and outputs with the same name and the same data type are automatically connected. As the input on the keyboard agent side is called "light input", it didn't strike me as the best idea to name my light output "light input" too - that would be misleading. So auto-plumbing won't work, right?
b) you explicitly specify the slots that should be connected. Somehow my first attempts don't seem to work though.
Here my attempt:

tau midi light controller create
midi input create
alpha manager create
*wait* (assuming from Geert's midi script that the plumber doesn't explicitly need to be created?)
plumber hey tau keyboard 1 light input to tau midi light controller 1 lights matrix output connect
=> failed: inappropriate arguments for the verb connect

When I type

plumber hey tau keyboard 1 to tau midi light controller 1 connect

I get "plumber 1: incompatible"

The latter is as expected, because auto-plumbing won't work due to the different names. In the first case I would have expected something along the lines of "incompatible format" if the formats don't match; this error message hints more in the direction of a general syntax error.
So before I search like mad for why the format of my output could be incompatible with the lights input of the keyboard: is the syntax ok? In which cases can I get the error message I get?

Another question:
Currently the leds will presumably only update when some kind of midi event comes in; otherwise no event is generated and thus no wire is there.
All the configuration properties that are mapped to atoms are in the pImpl class (which hides the implementation details of the main agent class) though, so the question: what is the "usual" way to trigger changes from the pImpl class in order to update the leds if e.g. the layout changes but no additional midi notes come in? (I am already in the realtime context when setting the properties via fastcalls.) Presumably I have to create an event? (Just creating a wire instance and calling event_start won't be the intended way, I guess?)


written by: NothanUmber

Wed, 26 Oct 2011 23:05:55 +0100 BST

Argh, found my mistake - being able to read is sometimes an advantage... the auto-plumbing searches for "X output" to "X input". Renamed "lights matrix output" to "light output", and now auto-plumbing seems to work at first sight (at least I got an ok from the Eigencommander...).
Still don't understand why the manual setup steps posted above didn't work - perhaps it's related to the auto_slot part of the light input definition on the keyboard side, policy=self.led_input.vector_policy(1,False,False,auto_slot=True), which only allows auto-plumbing at all?

Now I can have fun trying to get everything to work in the next days in the not completely unlikely case that there are some bugs/wrong assumptions :)
