Topic: About using AI technology to tune model to fit one particular piano...

Hi

This is a message for the development team.

I suppose that you spend a lot of time tuning every new engine/model (Pianoteq version) to its different instrument targets.
I suppose that, to a certain extent, this is a manual process, as recognising the "character" of a sound requires a kind of human intelligence.
So I am curious to know how much effort the testing takes (trying and listening to an instrument, and assessing how it sounds) compared to the development effort (mathematical formulas and software coding).

Then finally my question.
Do you use, or would you consider using, AI (machine learning, sound recognition) to automate the tuning and testing part of your development process and save yourself tuning time?

Can we discuss this idea? I am very curious; I would like to understand what sound recognition is, and how applicable this idea is in your development context...

Re: About using AI technology to tune model to fit one particular piano...

OK, not an inspiring subject? Perhaps because we are lacking an entry point to start the discussion.
Look at this article, for example:
https://medium.com/@awjuliani/recognizi...c37444d44d

Re: About using AI technology to tune model to fit one particular piano...

Another article :

https://medium.com/@ageitgey/machine-le...293c162f7a

Look at the first figure, and imagine the input is the piano recording you want to model, and the output is the list of values for your engine that best approximates this sound...
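To make the figure's idea a bit more concrete, here is a rough Python sketch of "recording in, parameter values out" as a plain regression problem. Everything in it is invented for illustration (the number of parameters, the spectra, the training data); it is not Pianoteq's actual interface or parameter set:

```python
# Sketch: learn a mapping from an audio "fingerprint" (here just a magnitude
# spectrum) to a vector of engine parameters.  All data below is synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_params = 5        # e.g. hammer hardness, string inharmonicity, ... (made up)
n_examples = 200    # recordings for which the "right" parameters are known
n_bins = 512        # spectral bins used as input features

# Pretend training set: spectra of known instruments and the parameter
# values that were used (or hand-tuned) to reproduce them.
spectra = rng.random((n_examples, n_bins))
params = rng.random((n_examples, n_params))

# Simplest possible "network": a linear least-squares mapping.
W, *_ = np.linalg.lstsq(spectra, params, rcond=None)

def estimate_parameters(recording_spectrum):
    """Guess engine parameters for a new recording's spectrum."""
    return recording_spectrum @ W

new_piano = rng.random(n_bins)
print(estimate_parameters(new_piano))   # a starting point for fine tuning
```

A real system would of course replace the linear map with a deep network and the random spectra with real recordings, but the input/output shape of the problem is the same.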

Last edited by ziczack (01-10-2017 22:26)

Re: About using AI technology to tune model to fit one particular piano...

The future is coming, irresistibly! Artificial intelligence is ramping up at an incredible pace, and it is impossible to remain indifferent. Overcoming the difficulty of feeding information about sound to artificial intelligence is an interesting task. Let me dream a little... you could pair the MIDI data from a PNOscan installed in a piano with the recorded sound. Install several such devices in a conservatory and let the students practise for hours... After training, you would get a tool that recreates the timbre of the instrument it listened to from incoming MIDI messages. And the MIDI messages could be generated directly from the brain with a brainwave scanner (is it worth buying a MIDI keyboard, or is it better to buy a scanner and train the brain directly, without any physical movements????).
This technology could be hot on Pianoteq's heels (Modartt! Stay vigilant and relevant!)
(it will be fun when you have to train the keyboard for yourself))
Could the "DSD" format be simpler for decoding the sound?

Last edited by scherbakov.al (30-09-2017 17:43)

Re: About using AI technology to tune model to fit one particular piano...

Welcome to the forum, ziczack!

You have come right in with a hot and interesting subject. In our offices we have people who are very much into deep learning, but they are not doing musical stuff. So far, the tuning work for a given instrument is automated only for the roughest approximation, and the final tuning is done by hand, as it is not purely mechanical work but requires some aesthetic judgement which is not easy to put inside the objective function of a neural network. In other words, instead of using artificial neurones, we use human neurones, which could one day become fairly outdated.

Re: About using AI technology to tune model to fit one particular piano...

Hi Philippe,

Thanks for your kind words of welcome! True, I am a new poster, but I have been coming to read the forum from time to time for a long while. :-)

And thank you for your answer. So, if I understand, you use an automated procedure for the "rough tuning". Probably a kind of "brute force" approach: I imagine you have a "MIDI test" procedure, you "explore" the combinations of (key) parameters, and you select the best candidate (or the most promising candidates), i.e. the one with the best fitness score against the target sounds. I speculate that you can probably do some Fourier analysis of the input sound and the output sound to measure "signal" proximity. That works for one sample, but the fitness should also consider a series of samples (as the sound evolves over time). There are probably other methods (it looks like a complicated problem to me), but in any case this could represent a lot of computation.
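Something like this rough Python sketch is what I have in mind: render the same MIDI test with many candidate parameter sets, compare each result to the target recording frame by frame (so the time evolution counts), and keep the best-scoring candidate. The render_with() function and the parameter grid are hypothetical placeholders, not a real Pianoteq interface:

```python
import itertools
import numpy as np

def frame_spectra(audio, frame=2048, hop=1024):
    """Magnitude spectra of successive frames (a poor man's STFT)."""
    frames = [audio[i:i + frame] for i in range(0, len(audio) - frame, hop)]
    return np.abs(np.fft.rfft(np.array(frames) * np.hanning(frame), axis=1))

def spectral_distance(a, b):
    """Average log-spectral distance over the frames both signals share."""
    n = min(len(a), len(b))
    return np.mean((np.log1p(a[:n]) - np.log1p(b[:n])) ** 2)

def search(target_audio, render_with, grid):
    """Brute force: try every parameter set, keep the closest-sounding one."""
    target = frame_spectra(target_audio)
    return min(grid, key=lambda p: spectral_distance(
        target, frame_spectra(render_with(p))))

# A hypothetical 3-parameter grid: 10 * 10 * 10 = 1000 renders to score.
grid = list(itertools.product(np.linspace(0, 1, 10), repeat=3))
```

Even this toy grid already means a thousand renders per instrument, which is why I wonder about the computing power below.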

Do you need some kind of "supercomputer" or "cloud computing" power for this...

Considering your last sentence, "requires some aesthetic judgement which is not easy to put inside an objective function of a neural network": I am not a specialist, but I have seen how image recognition is based on training a network with a set of "good responses" (tagged values describing the images). So if we had a data set of "sounds" coming from known pianos, we could train a network to recognise those sounds. The network could then say: I recognise this sound as piano A at x%, piano B at y%. Then, changing a parameter of the engine, you could see how it "closes the gap" towards a particular piano.
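A tiny Python sketch of that idea, using scikit-learn and completely invented data, just to show how the classifier's probability could serve as the score to maximise:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.random((300, 256))                                   # spectra of recordings
y = rng.choice(["piano_A", "piano_B", "piano_C"], size=300)  # which piano each is

# Train a small network to recognise which piano a spectrum comes from.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X, y)

def closeness_to_target(candidate_spectrum, target="piano_A"):
    """How strongly the classifier recognises the candidate as the target piano."""
    proba = clf.predict_proba(candidate_spectrum.reshape(1, -1))[0]
    return proba[list(clf.classes_).index(target)]   # higher = closer

# Then, nudging one engine parameter, check whether the score improves, e.g.
# (using the hypothetical render_with() and frame_spectra() from my earlier sketch):
# before = closeness_to_target(frame_spectra(render_with(p)).mean(axis=0))
# after  = closeness_to_target(frame_spectra(render_with(p_nudged)).mean(axis=0))
```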

The funny thing is that you could use "sampled pianos" to generate your dataset!

Does this make sense? Or perhaps the recognition is too complicated (as with speech recognition, where 95% versus 99% is the difference between useless and useful recognition, but that success rate is almost impossible to achieve and requires so much data to converge). In that case, the manual process could turn out to be the best effort/reward approach...

Last edited by ziczack (01-10-2017 21:32)

Re: About using AI technology to tune model to fit one particular piano...

https://arxiv.org/abs/1605.09507

https://arxiv.org/pdf/1605.09507.pdf
http://ieeexplore.ieee.org/document/7755799/
[
Deep Convolutional Neural Networks for Predominant Instrument Recognition in Polyphonic Music

Abstract:
Identifying musical instruments in polyphonic music recordings is a challenging but important problem in the field of music information retrieval. It enables music search by instrument, helps recognize musical genres, or can make music transcription easier and more accurate. In this paper, we present a convolutional neural network framework for predominant instrument recognition in real-world polyphonic music. We train our network from fixed-length music excerpts with a single-labeled predominant instrument and estimate an arbitrary number of predominant instruments from an audio signal with a variable length. To obtain the audio-excerpt-wise result, we aggregate multiple outputs from sliding windows over the test audio. In doing so, we investigated two different aggregation methods: one takes the class-wise average followed by normalization, and the other perform temporally local class-wise max-pooling on the output probability prior to averaging and normalization steps to minimize the effect of averaging process suppresses the activation of sporadically appearing instruments. In addition, we conducted extensive experiments on several important factors that affect the performance, including analysis window size, identification threshold, and activation functions for neural networks to find the optimal set of parameters. Our analysis on the instrument-wise performance found that the onset type is a critical factor for recall and precision of each instrument. Using a dataset of 10k audio excerpts from 11 instruments for evaluation, we found that convolutional neural networks are more robust than conventional methods that exploit spectral features and source separation with support vector machines. Experimental results showed that the proposed convolutional network architecture obtained an F1 measure of 0.619 for micro and 0.513 for macro, respectively, achieving 23.1% and 18.8% in performance improvement compared with the state-of-the-art algorithm.
]
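For what it's worth, the two aggregation methods the abstract describes could look roughly like this in Python; the shapes and the pooling width are my guesses, just to show the shape of the algorithm:

```python
import numpy as np

def aggregate_average(window_probs):
    """(a) Class-wise average over the sliding windows, then normalise."""
    avg = window_probs.mean(axis=0)
    return avg / avg.sum()

def aggregate_maxpool(window_probs, pool=4):
    """(b) Temporally local max-pool per class first, so instruments that
    appear only briefly are not washed out by the averaging."""
    n = (len(window_probs) // pool) * pool
    pooled = window_probs[:n].reshape(-1, pool, window_probs.shape[1]).max(axis=1)
    out = pooled.mean(axis=0)
    return out / out.sum()

probs = np.random.default_rng(2).random((32, 11))   # 32 windows, 11 instruments
print(aggregate_average(probs))
print(aggregate_maxpool(probs))
```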

https://www.toptal.com/algorithms/shaza...ecognition
[
Shazam It! Music Recognition Algorithms, Fingerprinting, and Processing
]

Last edited by ziczack (01-10-2017 22:05)

Re: About using AI technology to tune model to fit one particular piano...

A more radical approach: a neural net trained to "reproduce" sounds directly.
In this case there is no more physical piano modelling, but a synthesizer driven by a neural net, trained to mimic a sound source:

https://deepmind.com/blog/wavenet-gener...raw-audio/

https://arxiv.org/pdf/1609.03499.pdf
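Stripped to the bone, the core idea from the paper is something like this Python sketch: audio is mu-law quantised to 256 levels and generated one sample at a time, each sample drawn from a distribution conditioned on the previous ones. Here predict_distribution() is only a placeholder for the trained causal network:

```python
import numpy as np

MU = 255

def mu_law_encode(x):
    """Map a signal in [-1, 1] to 256 discrete levels (as in the paper)."""
    compressed = np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)
    return ((compressed + 1) / 2 * MU).astype(int)

def mu_law_decode(q):
    """Inverse of mu_law_encode: levels back to a signal in [-1, 1]."""
    x = 2 * q / MU - 1
    return np.sign(x) * ((1 + MU) ** np.abs(x) - 1) / MU

def predict_distribution(history):
    """Placeholder for the trained network: here just a uniform distribution."""
    return np.full(MU + 1, 1.0 / (MU + 1))

def generate(n_samples, rng=np.random.default_rng(3)):
    """Autoregressive generation: draw one quantised sample at a time."""
    samples = []
    for _ in range(n_samples):
        p = predict_distribution(samples)
        samples.append(rng.choice(MU + 1, p=p))
    return mu_law_decode(np.array(samples))

audio = generate(1000)   # with a real trained net, this would sound like the source
```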

Fascinating...

Last edited by ziczack (04-10-2017 08:18)

Re: About using AI technology to tune model to fit one particular piano...

Very fascinating topic!

Can we get these algorithms perfected and turned loose to achieve the Bösendorfer Concert Grand 290 Imperial sound already?

Seriously though, the future is very exciting as deep learning technology matures, hardware computation capability increases, and computation costs fall.

Osho