Developers: VST Audio Proxy Agent


written by: MarkPowell

Sat, 5 May 2012 16:19:21 +0100 BST

Thinking through what some people are doing with EigenD in terms of routing into DAWs, it occurs to me that what would be extremely useful is a proxy Agent that can take an audio input within a Workbench setup and redirect it to a VSTi sitting in the DAW, where it can then be recorded like any other virtual instrument. The proxy Agent could then have an audio output that passed the input straight back out again, making it essentially transparent in EigenD. That way you could have a setup that was played exclusively from EigenD, but where each individual instrument (or indeed any audio feed) could be passed to a separate instance of the VSTi, each sitting in a different track in your DAW. This would allow each to be recorded and arranged after the event.

I'm guessing that, due to the hassle with establishing reliable inter-application communication and synchronisation, this is something that seems easy, but would actually be a complete nightmare to achieve. Could anyone with more technical knowledge advise just how realistic (or otherwise) this would be to achieve?

Thanks,
Mark.
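As a very rough illustration of the proxy idea (this is not EigenD code; a real Agent would be built against EigenD's own plugin APIs, and the function name and plain-UDP transport here are invented purely for the sketch), the sending half might behave like this:

```python
import socket
import struct

def make_proxy_sender(host="127.0.0.1", port=9000):
    """Hypothetical passthrough: forward a copy of each audio buffer
    over UDP while returning the input unchanged."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def process(samples):
        # Pack the float samples into a datagram for a DAW-side receiver...
        payload = struct.pack(f"<{len(samples)}f", *samples)
        sock.sendto(payload, (host, port))
        # ...and pass the audio straight through, so the proxy is
        # transparent to whatever sits downstream in the rig.
        return samples

    return process
```

The key property is the last line of `process`: the audio comes back out unchanged, which is what would make the Agent invisible inside the EigenD signal chain.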


written by: MarkPowell

Sat, 5 May 2012 16:20:18 +0100 BST

If this were possible (huge emphasis on the word 'if'), then the next logical step would be an equivalent that worked the other way: a VST effect that you could put into your DAW to route the audio into EigenD.
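As a toy illustration of what that reverse half would have to do (again invented names and a plain-UDP transport, not a real VST), the receiving side would just decode incoming datagrams back into sample buffers:

```python
import socket
import struct

def make_proxy_receiver(port, host="127.0.0.1"):
    """Hypothetical receiver: accept float audio buffers sent as UDP
    datagrams and decode each back into a list of samples."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    sock.settimeout(5.0)

    def next_buffer():
        data, _ = sock.recvfrom(65536)
        count = len(data) // 4  # 4 bytes per little-endian float32
        return list(struct.unpack(f"<{count}f", data))

    return next_buffer
```

A real plugin would of course need buffering and clock recovery to cope with jitter between the two applications, which is where the "complete nightmare" part comes in.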


written by: carvingCode

Sat, 5 May 2012 16:39:45 +0100 BST

It is possible to set up multiple VSTs in a DAW and have the Eigenharp control them. Splits can even be set up to control multiple VSTs simultaneously, although I prefer one at a time, as I'm only recording one voice at a time.

What advantage would a native EigenD agent have?

Randy


written by: mikemilton

Sat, 5 May 2012 19:17:44 +0100 BST

@Mark - I've not tried it, but Plogue Bidule running as a plugin in your DAW might do what you want.

http://www.plogue.com/products/bidule/

Some of us used it with EigenD in the early days, but this might be an interesting application if it can be made to work, particularly if it can be made to support more than 2 channels.

@Randy - Seriously?


written by: geert

Sat, 5 May 2012 19:20:02 +0100 BST

There are several solutions for this already, e.g. JackOSX, or the standard Apple NetSend and NetReceive. Jack is for me more unstable than Soundflower, but I've used NetSend/NetReceive successfully across my home network, even over WiFi.


written by: MarkPowell

Sat, 5 May 2012 19:57:08 +0100 BST

Hi Mike,
Thanks for the PB suggestion. I'll have a look and see if it gives what I want.


written by: MarkPowell

Sat, 5 May 2012 20:03:34 +0100 BST

Hi CarvingCode,
I presume you mean using MIDI to control the VSTs? This does have advantages, since what you record can then be manipulated in the DAW, but it would be at the expense of some level of expression. It may be the placebo effect, but playing a VST in EigenD seems more responsive. Having a way to filter the audio out of EigenD to record it when required, while still having it come from EigenD while playing, would also mean I could use the same setup regardless of whether I wanted to record, with reduced resource usage if I didn't. MIDI is a good suggestion though, and it's certainly my 'plan B', but the other disadvantage is that the same approach wouldn't work for the sampler instruments in ED.


written by: MarkPowell

Sat, 5 May 2012 20:17:56 +0100 BST

Hi Geert,
I've tried Jack but, like you, found it unreliable. Not looked at NetSend though - does that allow audio, or is it just MIDI over Ethernet?

My first attempt at doing what I wanted was to remove the mixer from the standard Alpha setup and have each instrument rig feed directly into a separate pair of Soundflower16 channels. This actually worked pretty well, but after three or four separate feeds I started to get a lot of latency when monitoring via Live.

My other plan is to look at a Max for Live setup that allows me to swap the recorded track in Live, plus send Belcanto to ED and reroute the appropriate instrument to the loopback in my Saffire for recording in Live whilst playing through ED. I think this will work, and I'm already doing something similar to select ED instruments/scales with my Monome via M4L, but my thinking was that a simple agent you could throw into the middle of an existing audio feed in Workbench, plus a VST that picks that audio stream up in Live, would have been much easier to work with on the fly. I'll check out Mike's suggestion of Bidule and see how I go.

Thanks to you all for your suggestions.

Cheers,
Mark.


written by: carvingCode

Sat, 5 May 2012 22:19:37 +0100 BST

I've recorded Eigenharp output directly into Reaper, by selecting the main I/O channel as the input. Even recorded the piano sound.


written by: MarkPowell

Sun, 6 May 2012 00:24:54 +0100 BST

Hi Randy,
I've done the same into Live (in fact I told someone else how to do just that on another thread this morning), but the problem with doing that is that all instruments get recorded onto the same two channels. What I'm trying to do is maintain separation between each instrument, so that I can change things after the event whilst still being able to jam something out and not compose each part separately.

Thanks,
Mark.


written by: dhjdhj

Sun, 6 May 2012 03:10:59 +0100 BST

For what it's worth, I've been doing this for some time, but rather than using M4L I've been feeding raw OSC from EigenD into Max. From there I create the desired keyboard splits, tunings and other MIDI processing, and then I either play live into multiple VSTs hosted by Max, or I just send the processed MIDI data into my DAW (Digital Performer) using multi-record, so that each MIDI stream from Max goes to a separate track. Each track is associated with the desired VST or AU and can be recorded. I could probably just feed the audio channels from Max directly to the DAW with ReWire, but I haven't tried that.
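For anyone wondering what raw OSC actually looks like on the wire, a minimal encoder following the OSC 1.0 encoding rules fits in a few lines. (The `/key` address used below is an invented example for illustration, not EigenD's actual OSC address scheme.)

```python
import struct

def osc_string(s: str) -> bytes:
    """Encode a string as OSC 1.0 requires: ASCII, NUL-terminated,
    padded with NULs to a multiple of four bytes."""
    raw = s.encode("ascii") + b"\x00"
    return raw + b"\x00" * (-len(raw) % 4)

def osc_message(address: str, *args) -> bytes:
    """Build one OSC message: padded address, then a type-tag string
    (',' plus one tag per argument), then big-endian argument data."""
    tags = ","
    payload = b""
    for arg in args:
        if isinstance(arg, float):
            tags += "f"
            payload += struct.pack(">f", arg)
        elif isinstance(arg, int):
            tags += "i"
            payload += struct.pack(">i", arg)
        else:
            raise TypeError(f"unsupported OSC argument: {arg!r}")
    return osc_string(address) + osc_string(tags) + payload
```

For example, `osc_message("/key", 1, 0.5)` yields a 20-byte packet: 8 bytes of padded address, the 4-byte type-tag string `,if`, then a 4-byte int and a 4-byte float.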


written by: carvingCode

Sun, 6 May 2012 17:05:16 +0100 BST

MarkPowell said:
Hi Randy,
I've done the same into Live (in fact I told someone else how to do just that on another thread this morning), but the problem with doing that is that all instruments get recorded onto the same two channels. What I'm trying to do is maintain separation between each instrument, so that I can change things after the event whilst still being able to jam something out and not compose each part separately.

Thanks,
Mark.


Right, if any of the tracks are playing at the same time while recording, they also get recorded.

Interesting problem you have. Sounds like it will be very expensive to solve. :)

Randy



