The Techni Musique Vocal Synth demo for the C64 (mp3)


Post by stamba »

Edit of 30/04/2008!
Here are a few photos:

[photos of the Techni Musique interface]

and the few notes I've put up on my blog about audio/MIDI peripherals for the C64: http://www.deviationsociale.com/synths/

Here are the 3 files I was able to back up from the floppy that came with the unit:
http://stamba.free.fr/c64/tm/Techni_Mus ... o_disk.rar

It seems these files are only the demo disk, but Yago from #c-64 on IRCnet helped me understand what the program does and what can be typed to feed data to the vocal synth.



It uses phonemes, and I don't have the phoneme list, so it will be quite hard to work them out!
I have to check the list shown in the demo.. maybe that's the phoneme list!

Here is what you have to do while running the demo to input your own data (a consolidated sketch follows the log):
[16:57] <yago> well, poke stores a value into memory, and sys and usr call machine-language programs
[16:59] <yago> after the program has started, stop it, and try to enter e.g. line 24020 (without the line number)
[16:59] <yago> then the cart should speak
[16:59] <yago> then fiddle with the parameters for poke2,X and a=usr(Y)
[17:03] <yago> then poke 2,65
[17:03] <yago> return
[17:03] <yago> then sys39000
[17:03] <yago> return
[17:03] <yago> then a=usr(50)+usr(0)+usr(0)
[17:03] <yago> RUN

And it works! Thanks for the help!
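
For the record, here is Yago's sequence rolled into one small program. This is only a sketch: it assumes demo.prg has already been loaded and stopped, since SYS 39000 and the USR() vector only exist once the demo has installed its machine-language routines, and the values 65 and 50 are simply the ones from the log, there to be fiddled with:

10 REM SKETCH OF YAGO'S SEQUENCE -- REQUIRES DEMO.PRG LOADED FIRST,
20 REM OTHERWISE SYS 39000 AND USR() POINT AT NOTHING USEFUL
30 POKE 2,65 : REM PHONEME/PARAMETER BYTE IN ZERO PAGE ($02); TRY OTHER VALUES
40 SYS 39000 : REM CALL THE ML ROUTINE THE DEMO SETS UP FOR THE INTERFACE
50 A=USR(50)+USR(0)+USR(0) : REM FEED PARAMETERS TO THE SYNTH

Typing the three statements in direct mode (without the line numbers) works just as well, which is what Yago was describing.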



Audio demo of the vocal synth, C64 version


and there we go! :D

http://stamba.free.fr/c64/Techni_Musiqu ... stamba.mp3


PS: A call to anyone who might have info on this interface! I can't manage to run the other programs on the floppy.. only demo.prg starts! HELP!
Last edited by stamba on 30 Apr 2008, 15:12; edited 1 time.

Post by redrum76 »

Excellent!

Post by 6503 »

There was a very similar interface for the Oric...

Post by olivierm8 »

Ah, I'd forgotten that it sings...
Well, if you could put the program up on the net, I'd love to take a look at what's inside.

Post by stamba »

Olivierm8: yep! As soon as I manage to get my XE1541 cable working! ;)

Post by SEBZ.G »

Awesome, it's not what I thought it was...

It sounds like the phonemes were sampled, which gives a vocoder-like impression (a vocal effect used heavily by Kraftwerk, Daft Punk, and many others...)

Post by SbM »

SEBZ.G wrote: It sounds like the phonemes were sampled
That's the classic technique for this kind of thing.

Post by stamba »

The different speech synthesis methods:

Concatenative synthesis
Concatenative synthesis is based on the concatenation (or stringing together) of segments of recorded speech. Generally, concatenative synthesis produces the most natural-sounding synthesized speech. However, differences between natural variations in speech and the nature of the automated techniques for segmenting the waveforms sometimes result in audible glitches in the output. There are three main sub-types of concatenative synthesis.

Unit selection synthesis
Unit selection synthesis uses large databases of recorded speech. During database creation, each recorded utterance is segmented into some or all of the following: individual phones, syllables, morphemes, words, phrases, and sentences. Typically, the division into segments is done using a specially modified speech recognizer set to a "forced alignment" mode with some manual correction afterward, using visual representations such as the waveform and spectrogram.[11] An index of the units in the speech database is then created based on the segmentation and acoustic parameters like the fundamental frequency (pitch), duration, position in the syllable, and neighboring phones. At runtime, the desired target utterance is created by determining the best chain of candidate units from the database (unit selection). This process is typically achieved using a specially weighted decision tree.

Unit selection provides the greatest naturalness, because it applies only a small amount of digital signal processing (DSP) to the recorded speech. DSP often makes recorded speech sound less natural, although some systems use a small amount of signal processing at the point of concatenation to smooth the waveform. The output from the best unit-selection systems is often indistinguishable from real human voices, especially in contexts for which the TTS system has been tuned. However, maximum naturalness typically requires unit-selection speech databases to be very large, in some systems ranging into the gigabytes of recorded data, representing dozens of hours of speech.[12] Also, unit selection algorithms have been known to select segments from a place that results in less-than-ideal synthesis (e.g. minor words become unclear) even when a better choice exists in the database.[13]
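
As a toy illustration of the selection step (my addition, not from the text above): score each candidate unit against the target's pitch and duration, plus a join penalty against the previously chosen unit, and keep the cheapest. A real system chains this search across a whole utterance; this sketch only fills a single slot, and all the numbers are made up:

10 REM TOY UNIT-SELECTION COST: PICK THE CANDIDATE CLOSEST TO THE
20 REM TARGET PITCH (HZ) AND DURATION (MS), PLUS A JOIN PENALTY
30 N=3 : TP=120 : TD=80 : LP=115 : REM CANDIDATES, TARGETS, LAST UNIT PITCH
40 DIM CP(N),CD(N)
50 FOR I=1 TO N : READ CP(I),CD(I) : NEXT
60 DATA 110,75, 130,90, 118,82
70 BI=1 : BC=1E9
80 FOR I=1 TO N
90 C=ABS(CP(I)-TP)+ABS(CD(I)-TD)+ABS(CP(I)-LP) : REM TARGET COST + JOIN COST
100 IF C<BC THEN BC=C : BI=I
110 NEXT
120 PRINT "BEST CANDIDATE:";BI;"COST:";BC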

Diphone synthesis
Diphone synthesis uses a minimal speech database containing all the diphones (sound-to-sound transitions) occurring in a language. The number of diphones depends on the phonotactics of the language: for example, Spanish has about 800 diphones, and German about 2500. In diphone synthesis, only one example of each diphone is contained in the speech database. At runtime, the target prosody of a sentence is superimposed on these minimal units by means of digital signal processing techniques such as linear predictive coding, PSOLA or MBROLA.[14] The quality of the resulting speech is generally worse than that of unit-selection systems, but more natural-sounding than the output of formant synthesizers. Diphone synthesis suffers from the sonic glitches of concatenative synthesis and the robotic-sounding nature of formant synthesis, and has few of the advantages of either approach other than small size. As such, its use in commercial applications is declining, although it continues to be used in research because there are a number of freely available software implementations.

Domain-specific synthesis
Domain-specific synthesis concatenates prerecorded words and phrases to create complete utterances. It is used in applications where the variety of texts the system will output is limited to a particular domain, like transit schedule announcements or weather reports.[15] The technology is very simple to implement, and has been in commercial use for a long time, in devices like talking clocks and calculators. The level of naturalness of these systems can be very high because the variety of sentence types is limited, and they closely match the prosody and intonation of the original recordings.

Because these systems are limited by the words and phrases in their databases, they are not general-purpose and can only synthesize the combinations of words and phrases with which they have been preprogrammed. The blending of words within naturally spoken language, however, can still cause problems unless the many variations are taken into account. For example, in non-rhotic dialects of English the <r> in words like <clear> /ˈkliːə/ is usually only pronounced when the following word has a vowel as its first letter (e.g. <clear out> is realized as /ˌkliːəɹˈɑʊt/). Likewise in French, many final consonants are no longer silent if followed by a word that begins with a vowel, an effect called liaison. This alternation cannot be reproduced by a simple word-concatenation system, which would require additional complexity to be context-sensitive.
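
A sketch of the idea in C64 BASIC (my illustration; the PRINT at the end stands in for whatever routine would actually trigger a prerecorded word sample, and the word list is deliberately tiny). The point is that the system only ever glues together words it already has:

10 REM TOY DOMAIN-SPECIFIC SYNTHESIS: A TALKING CLOCK BUILT FROM A
20 REM FIXED WORD LIST -- EACH WORD WOULD BE A PRERECORDED SAMPLE
30 DIM W$(12)
40 FOR I=0 TO 12 : READ W$(I) : NEXT
50 DATA IT,IS,OH,ONE,TWO,THREE,FOUR,FIVE,SIX,SEVEN,EIGHT,NINE,TEN
60 H=8 : M=5 : REM THE TIME TO SPEAK: 8:05
70 REM "PLAY" EACH WORD IN SEQUENCE; A REAL SYSTEM TRIGGERS THE SAMPLES
80 PRINT W$(0);" ";W$(1);" ";W$(H+2);" ";W$(2);" ";W$(M+2)

This prints (i.e. "says") IT IS EIGHT OH FIVE, and it can never say anything outside its word list, which is exactly the limitation described above.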

Formant synthesis
Formant synthesis does not use human speech samples at runtime. Instead, the synthesized speech output is created using an acoustic model. Parameters such as fundamental frequency, voicing, and noise levels are varied over time to create a waveform of artificial speech. This method is sometimes called rules-based synthesis; however, many concatenative systems also have rules-based components.

Many systems based on formant synthesis technology generate artificial, robotic-sounding speech that would never be mistaken for human speech. However, maximum naturalness is not always the goal of a speech synthesis system, and formant synthesis systems have advantages over concatenative systems. Formant-synthesized speech can be reliably intelligible, even at very high speeds, avoiding the acoustic glitches that commonly plague concatenative systems. High-speed synthesized speech is used by the visually impaired to quickly navigate computers using a screen reader. Formant synthesizers are usually smaller programs than concatenative systems because they do not have a database of speech samples. They can therefore be used in embedded systems, where memory and microprocessor power are especially limited. Because formant-based systems have complete control of all aspects of the output speech, a wide variety of prosodies and intonations can be output, conveying not just questions and statements, but a variety of emotions and tones of voice.
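
As a C64-flavoured caricature of the idea (my illustration, nothing to do with the Techni Musique unit): use a buzzy SID sawtooth as the glottal source and sweep the SID's band-pass filter as a single crude moving formant. A real formant synthesizer drives several resonators from rules; this has just one. Register addresses are the standard SID ones at 54272 ($D400):

10 REM CRUDE ONE-FORMANT SKETCH ON THE SID: SAWTOOTH SOURCE PLUS A
20 REM SWEPT BAND-PASS FILTER AS A MOVING VOCAL-TRACT RESONANCE
30 S=54272 : FOR I=0 TO 24 : POKE S+I,0 : NEXT : REM CLEAR THE SID
40 POKE S+5,0 : POKE S+6,240 : REM VOICE 1 ADSR: FULL SUSTAIN
50 POKE S,37 : POKE S+1,8 : REM VOICE 1 PITCH, ROUGHLY 120 HZ ON PAL
60 POKE S+23,1 : REM ROUTE VOICE 1 THROUGH THE FILTER
70 POKE S+24,47 : REM BAND-PASS MODE + MAX VOLUME ($2F)
80 POKE S+4,33 : REM GATE ON, SAWTOOTH
90 FOR C=20 TO 200 : POKE S+22,C : FOR D=1 TO 30 : NEXT : NEXT : REM SWEEP CUTOFF
100 POKE S+4,32 : REM GATE OFF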

Examples of non-real-time but highly accurate intonation control in formant synthesis include the work done in the late 1970s for the Texas Instruments toy Speak & Spell, and in early-1980s Sega arcade machines.[16] Creating proper intonation for these projects was painstaking, and the results have yet to be matched by real-time text-to-speech interfaces.[17]

Articulatory synthesis
Articulatory synthesis refers to computational techniques for synthesizing speech based on models of the human vocal tract and the articulation processes occurring there. The first articulatory synthesizer regularly used for laboratory experiments was developed at Haskins Laboratories in the mid-1970s by Philip Rubin, Tom Baer, and Paul Mermelstein. This synthesizer, known as ASY, was based on vocal tract models developed at Bell Laboratories in the 1960s and 1970s by Paul Mermelstein, Cecil Coker, and colleagues.

Until recently, articulatory synthesis models have not been incorporated into commercial speech synthesis systems. A notable exception is the NeXT-based system originally developed and marketed by Trillium Sound Research, a spin-off company of the University of Calgary, where much of the original research was conducted. Following the demise of the various incarnations of NeXT (started by Steve Jobs in the late 1980s and merged with Apple Computer in 1997), the Trillium software was published under the GNU General Public License, with work continuing as gnuspeech. The system, first marketed in 1994, provides full articulatory-based text-to-speech conversion using a waveguide or transmission-line analog of the human oral and nasal tracts controlled by Carré's "distinctive region model".

HMM-based synthesis
HMM-based synthesis is a synthesis method based on hidden Markov models. In this system, the frequency spectrum (vocal tract), fundamental frequency (vocal source), and duration (prosody) of speech are modeled simultaneously by HMMs. Speech waveforms are generated from HMMs themselves based on the maximum likelihood criterion.[18]


Sinewave synthesis
Sinewave synthesis is a technique for synthesizing speech by replacing the formants (main bands of energy) with pure-tone whistles.[19]
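
Again as a C64-flavoured stand-in (my illustration): the SID's triangle wave is not a pure sine, but it is close enough to whistle. Parking the three voices on the approximate first three formants of an "ah" vowel (about 700, 1100 and 2450 Hz; frequency values computed for PAL) gives the whistly, sinewave-speech effect for that one vowel:

10 REM SINEWAVE-SPEECH STAND-IN: THREE TRIANGLE TONES PARKED ON THE
20 REM APPROXIMATE FORMANTS OF AN 'AH' VOWEL (700, 1100, 2450 HZ)
30 S=54272 : FOR I=0 TO 24 : POKE S+I,0 : NEXT
40 POKE S+24,15 : REM MAX VOLUME, NO FILTER
50 FOR V=0 TO 2 : READ L,H
60 POKE S+7*V,L : POKE S+7*V+1,H : REM VOICE FREQUENCY
70 POKE S+7*V+6,240 : REM FULL SUSTAIN
80 POKE S+7*V+4,17 : REM GATE ON, TRIANGLE WAVE
90 NEXT
100 DATA 149,46, 51,73, 10,163 : REM PAL FREQ VALUES FOR 700/1100/2450 HZ
110 FOR D=1 TO 2000 : NEXT : REM LET IT RING FOR A MOMENT
120 FOR V=0 TO 2 : POKE S+7*V+4,16 : NEXT : REM GATES OFF

Real sinewave speech moves those three tones along the measured formant tracks of an utterance; here they just sit still.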

Post by coimbrap »

Brilliant...
I also love the Alsatian accent :wink:

That said, nice work... someone should try the CPC version.

Post by yvesffr »

I thought it was Breton :roll:
"Je vous aime" (© Pocket 1969)
"et moi je suis la vierge marie" (© Stamba 2009)
"Resistance is futile (if < 1 Ohm)"
"Un velux est un linux portugais"
"j'en vois encore un bout, yves" (© 2010 SbM)
"In minitel we trust" - Silicium

Post by stamba »

There.. I've updated the first post with all of the above.
