[C] Neural Network
Hello everyone :)

Artificial intelligence is taking up more and more space in our daily lives, whether it's YouTube video suggestions, facial recognition, tracking, and much more.
I won't explain how it works here, there are already plenty of tutorials (but I might make one in JavaScript :D )
I made a simple external command that lets you create, train and evaluate neural networks :)



  • This command can be used in 2 different ways: either the classic way, like this:
    Code:
    neuralNetwork /COMMAND_A <ARG_1> <ARG_2> ... /COMMAND_B <ARG_1> <ARG_2> ...  ...

    or by piping the commands, like this:
    Code:
    (
    echo;/COMMAND_A <ARG_1> <ARG_2> ...
    echo;/COMMAND_B <ARG_1> <ARG_2> ...
    rem ...
    echo;/exit
    ) | NeuralNetwork


    Make sure to include the "echo;/exit" so that the pipe terminates properly

    You can also use this trick: https://batch.xoo.it/t5529-Redirection-retard-e-d-un-pipe-avec-un-call-labe…
    Which can give something like this:
    Code:
    @echo off

    if "%~1"==":main" goto %~1
    "%~0" :main | neuralNetwork
    pause>nul&exit

    :main

    echo;/COMMAND_A <ARG_1> <ARG_2> ...
    echo;/COMMAND_B <ARG_1> <ARG_2> ...
    rem ...


    (note that the "echo;/exit" is no longer necessary in this configuration)

    Of course, for aesthetic reasons, the "display" is disabled by default, so if you want to print some text you can go about it in 2 different ways:

    (1st way)
    Code:
    @echo off

    if "%~1"==":main" goto %~1
    %~0 :main | neuralNetwork
    pause>nul&exit

    :main

    echo;/show
    echo;bonjour à tous
    echo;/hide


    echo;/COMMAND_A <ARG_1> <ARG_2> ...
    echo;/COMMAND_B <ARG_1> <ARG_2> ...
    rem ...


    echo;/show
    echo;fin du programme
    echo;/hide



    (2nd way)
    Code:
    @echo off

    if "%~1"==":main" goto %~1
    "%~0" :main | neuralNetwork
    pause>nul&exit

    :main

    echo;/print "bonjour à tous"

    echo;/COMMAND_A <ARG_1> <ARG_2> ...
    echo;/COMMAND_B <ARG_1> <ARG_2> ...
    rem ...

    echo;/print "fin du programme"



    It's up to you to pick the syntax you prefer (note also that each syntax is different and comes with its own advantages and drawbacks).
    I also refer you to the documentation for more details, and I'll let you experiment with the command a bit to see how it works :)
    (a pretty cool feature is being able to export and import neural networks, which lets you, for example, run training sessions every day (because leaving a PC on for 4 weeks or 7 months...) and save your neural networks once they are well trained)

    Training neural networks yourself is quite difficult (it's easy enough when you're doing something like a XOR gate :D ), but beyond that it isn't necessarily trivial.
    You can use various techniques such as Q-Learning, Genetic Algorithms, and so on (see the sketch below for a genetic-algorithm-style loop built with /merge and /mutate).
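
    For example, here is a minimal sketch of an evolutionary loop built on the documented commands (the network names, the structure and the output file are made up, and the fitness/selection step is only a placeholder comment, since how you score a network depends entirely on your problem):
    Code:
    @echo off

    if "%~1"==":main" goto %~1
    "%~0" :main | neuralNetwork
    pause>nul&exit

    :main

    rem two parents with the same structure
    echo;/create "parent A" --structure 2 10 1
    echo;/create "parent B" --structure 2 10 1

    rem one generation: merge the parents, then mutate the child slightly
    echo;/merge "parent A" "parent B" "child" --mixing-power 15
    echo;/mutate "child" --rate 0.001

    rem score "child" here with /evaluate (problem-specific) and keep the
    rem best networks as the parents of the next generation

    rem save the result of this session for later
    echo;/export "child" --out "child.txt"
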


  • Code:
    [lang=c][download=NeuralNetwork.c]/*********************************************************************************\
    * Copyright (c) 2020 Flammrock                                                    *
    *                                                                                 *
    * Permission is hereby granted, free of charge, to any person obtaining a copy    *
    * of this software and associated documentation files (the "Software"), to deal   *
    * in the Software without restriction, including without limitation the rights    *
    * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell       *
    * copies of the Software, and to permit persons to whom the Software is           *
    * furnished to do so, subject to the following conditions:                        *
    *                                                                                 *
    * The above copyright notice and this permission notice shall be included in all  *
    * copies or substantial portions of the Software.                                 *
    *                                                                                 *
    * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR      *
    * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,        *
    * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE     *
    * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER          *
    * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,   *
    * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE   *
    * SOFTWARE.                                                                       *
    \*********************************************************************************/





    /**
    * [Documentation]
    *
    * Usage:
    *
    *   /create <name> --structure <layer>...
    *   /delete <name>
    *   /rename <name> <new_name> [--noerr]
    *   /export <name> [--out <file>] [--append]
    *   /import <name> <new_name> (--file <file> | --string <string>)
    *   /evaluate <name> --input <double>... [--out <file>] [--normalize (<min> <max>)...] [--append]
    *   /train <name> --input <double>... --output <double>... [[--normalize (<min> <max>)...] | [--normalize-input (<min> <max>)...] [--normalize-output (<min> <max>)...]] [--loop <int>]
    *   /merge <parent_name_1> <parent_name_2> <child_name> [--mixing-power <int>]
    *   /mutate <name> [--rate <double>]
    *   /getdna <name> [--out <file>] [--append] [--maxlengthline <int>]
    *   /print <string>
    *   /printn <string>
    *   /enable [<int>]
    *   /disable [<int>]
    *   /show [<int>]
    *   /hide [<int>]
    *   /about
    *   /exit
    *   /help [<command_name>]
    *
    *
    * Commands Description:
    *
    *   "/create" - This command create a neural network.
    *        <name>              : Just the name of the neural network, this name is used as a unique ID.
    *              If a neural network already have this name, an error is returned. (failed to rename)
    *        --structure <layer> : Allows to define the structure of the neural network.
    *              This first integer is for the "input layer" while the last integer is for the "output layer"
    *              sample: --structure 2 5 1 (That create a neural network with two neurons in input layer, five
    *              neurons in hidden layer and 1 neuron in output layer
    *
    *   "/delete" - This command delete a neural network.
    *        <name> : Delete the neural network which have this name. If no neural networks have this name, no error is returned.
    *
    *   "/rename" - This command rename a neural network
    *        <name>     : the old name
    *        <new_name> : the new name. If a neural network already have this name, an error is returned. (failed to rename)
    *       Optional:
    *        --noerr : if the neural network <name> not exist or if the neural network <new_name>
    *                already exist, no error is returned if this parameter is specified
    *
    *   "/export" - This command export a neural network to a file or print it in the console
    *        <name>       : name of the neural network
    *       Optional:
    *        --out <file> : write the neural network into a file
    *        --append     : if specified, the neural network is appended to the file (need the previous argument)
    *
    *   "/import" - This command imports a neural network that was exported before
    *        <name>            : name of the neural network to import
    *        <new_name>        : new name of the neural network (can be the same as the argument "<name>")
    *        --file <file>     : file where the neural network is located
    *        --string <string> : representation of a neural network
    *
    *   "/evaluate" - This command evaluate a neural network with a specified input
    *        <name>           : name of the neural network
    *        --input <double>... : the specified input must fit the first layer of the neural network.
    *              if the structure of the neural network is like : 5 8 8 9 3 2
    *              we need to pass an input of 5 numbers like that : 0.26 0.56 0.89 0.42 0.12
    *       Optional:
    *        --out <file>            : write the result to a file (if not specified, the result is print in the console)
    *        --normalize <min> <max> : normalize each double of the "--input" parameter between <min> and <max>
    *
    *   "/train" - This command train a neural network with inputs data and outputs data
    *        <name> : name of the neural network
    *        --input <double>... : the specified input must fit the first layer of the neural network.
    *              if the structure of the neural network is like : 5 8 8 9 3 2
    *              we need to pass an input of 5 numbers like that : 0.26 0.56 0.89 0.42 0.12
    *        --output <double>... : the specified output must fit the last layer of the neural network.
    *              if the structure of the neural network is like : 5 8 8 9 3 2
    *              we need to pass an output of 2 numbers like that : 0.12 0.86
    *
    *         But for accurate training, we can pass more than 5 numbers or 2 numbers, but for a better understanding of that,
    *         the XOR example will be nice to study :)
    *
    *       Optional:
    *        --loop <int>            : number of iterations
    *                        (the neural network learn very slowly because the global weights and bias follow the gradient of partials derivatives)
    *           default: 10000
    *        --normalize (<min> <max>)... : normalize each double of the "--input" and "--output" parameters between <min> and <max>
    *        --normalize-input (<min> <max>)... : same as --normalize but normalize only the input
    *        --normalize-output (<min> <max>)... : same as --normalize but normalize only the output
    *
    *   "/merge" - This command merge two neurals networks (with same structure) into one neural network
    *        <parent_name_1>       : name of a neural network
    *        <parent_name_2>       : name of a neural network
    *        <child>               : name of the child neural network
    *       Optional:
    *        --mixing-power <int>  : the strength of mixing (not proportional to the size of neurals networks)
    *              so more neural networks are big, more the mixing power must be big to "preserve" the same mixing as little neural networks
    *           default: 15
    *
    *   "/mutate" - This command mutate a neural network (some weights and some bias are randomly mutate)
    *        <name>          : name of the neural network
    *       Optional:
    *        --rate <double> : must be between 0 and 1. And must be very small too like "0.002" (not proportional to the size of neurals networks)
    *              so more neural networks are big, more the rate must be big to "preserve" the same rate as little neural networks
    *           default: 0.001
    *
    *   "/getdna" - This command write the dna of a neural network in a file or print it in the console
    *        <name> : name of the neural network
    *       Optional:
    *        --out <file>          : write the dna to a file (if not specified, the dna is print in the console)
    *        --maxlengthline <int> : max line length (-1 for infini), add '\n' every --maxlengthline character(s)
    *           default: -1
    *
    *         This command was just made for fun and this command will actually be useless.
    *
    *
    *   "/print" - This command print a string
    *        <string> : a simple string
    *
    *   "/printn" - This command print a string without new line character
    *        <string> : a simple string
    *
    *   "/enable" - This command enable the engine that execute command (ENABLED BY DEFAULT)
    *       Optional:
    *        <int> : if the engine was disabled with an optional argument,
    *              this argument must be specified and be the same when
    *              engine was disabled else the engine isn't enabled.
    *
    *   "/disable" - This command disable the engine that execute command
    *       Optional:
    *        <int> : this disable the engine with a "key" and others commands
    *              like "enable", "show" and "hide" must be have the same argument to work
    *
    *   "/show" - This command enable the display (only for piped commands)
    *       Optional:
    *        <int> : if the engine was disabled with an optional argument,
    *              this argument must be specified and be the same when
    *              engine was disabled else the display isn't enabled.
    *
    *   "/hide" - This command disable the display (only for piped commands)
    *       Optional:
    *        <int> : if the engine was disabled with an optional argument,
    *              this argument must be specified and be the same when
    *              engine was disabled else the display isn't disabled.
    *
    *
    *
    *
    * Example:
    *
    *   In this simple example, we create a neural network to compute a XOR logic gate.
    *   First, we need to create the neural network :
    *
    *         /create "XOR Gate" --structure 2 10 1
    *
    *   A XOR gate works like this :
    *
    *         0 0 => 0
    *         1 0 => 1
    *         0 1 => 1
    *         1 1 => 0
    *
    *  So we need an input layer of 2 neurons and an output layer of 1 neuron.
    *  To compute the XOR gate, we also need a hidden layer, because a XOR gate can't be computed with a simple perceptron.
    *
    *  So now, we need to train our neural network like this :
    *
    *         /train "XOR Gate" --input 0 0 1 0 0 1 1 1 --output 0 1 1 0 --loop 10000
    *
    *  This syntax is equivalent to :
    *
    *         /train "XOR Gate" --input 0 0 --output 0 --loop 10000
    *         /train "XOR Gate" --input 1 0 --output 1 --loop 10000
    *         /train "XOR Gate" --input 0 1 --output 1 --loop 10000
    *         /train "XOR Gate" --input 1 1 --output 0 --loop 10000
    *
    *  It's recommended to use this syntax : /train "XOR Gate" --input 0 0 1 0 0 1 1 1 --output 0 1 1 0 --loop 10000
    *  because it works a little differently from the other one and is more accurate and more efficient.
    *
    *  Then, we can test our neural network like this :
    *
    *         /evaluate "XOR Gate" --input 1 0
    *
    *  This command writes the result to stdout, so we can for instance redirect the result to a file like this :
    *
    *         /evaluate "XOR Gate" --input 1 0 > "my result.txt"         (Batch langage)
    *
    *  If the command is piped (the result can't be redirected to a file), we can also specify the --out parameter like this :
    *
    *         /evaluate "XOR Gate" --input 1 0 --out "my result.txt"
    *
    *  After some manipulation, we can save our neural network like this :
    *
    *         /export "XOR Gate" > "my neural network xor gate.txt"
    *     OR:
    *         /export "XOR Gate" -out "my neural network xor gate.txt"
    *
    *  We can import our neural network later :
    *
    *         /import "XOR Gate" --file "my neural network xor gate.txt"
    *
    *  Just for fun, we can print its DNA like this :
    *
    *         /getdna "XOR Gate" > "xor_dna.txt"
    *     OR:
    *         /getdna "XOR Gate" --out "xor_dna.txt"
    *
    *  With the commands "/merge" and "/mutate", we can easily create a genetic algorithm.
    *
    **/





    /**********************\
    *                      *
    *       INCLUDE        *
    *                      *
    \**********************/

    #include <stdio.h>
    #include <stdlib.h>
    #include <ctype.h>
    #include <io.h>
    #include <math.h>
    #include <time.h>
    #include <string.h>





    /**********************\
    *                      *
    *        MACRO         *
    *                      *
    \**********************/

    #define LEN(arr) ((int) (sizeof (arr) / sizeof (arr)[0]))
    #define _STR(arr) (char*)arr,LEN(arr)
    #define _1D(arr) (int*)arr,LEN(arr)
    #define _2D(arr) (int*)arr,LEN(arr),LEN(arr[0])





    /**********************\
    *                      *
    *       HELPER         *
    *                      *
    \**********************/

    void bad_argument() {
        fprintf(stderr,"Err: Bad arguments or Missing arguments, please see the documentation for more details\n");
        exit(1);
    }
    void bad_memory_allocation() {
        fprintf(stderr,"Err: unsuccessful allocation\n");
        exit(2);
    }
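    // Appends the character c to the growing, NUL-terminated buffer *line,
    // doubling its capacity (via realloc) whenever it runs out of space.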
    void putcharline(char c, char **line, int *index, int *size) {
        char *temp = NULL;
        if (*index+1 >= *size) {
            (*size) *= 2;
            temp = (char *)realloc(*line, (*size+1) * sizeof(char));
            if (temp == NULL) bad_memory_allocation();
            *line = temp;
        }
        (*line)[*index] = c;
        (*line)[*index+1] = '\0';
        (*index)++;
    }
    char * trygetdata(char v, char *data, int *index, int length) {
        int tempsize = 2;
        char *tempdata = (char *)malloc(tempsize * sizeof(char));
        if (tempdata == NULL) bad_memory_allocation();
        int tempindex = 0;
        while (data[*index] != v && *index < length) {
            putcharline(data[*index], &tempdata, &tempindex, &tempsize);
            (*index)++;
        }
        if (*index > length) {
            free(tempdata);
            return NULL;
        }
        (*index)++;
        return tempdata;
    }
    char * trygetdatan(int n, char *data, int *index, int length) {
        int tempsize = 2;
        char *tempdata = (char *)malloc(tempsize * sizeof(char));
        if (tempdata == NULL) bad_memory_allocation();
        int tempindex = 0;
        int i = 0;
        while (i < n && *index < length) {
            putcharline(data[*index], &tempdata, &tempindex, &tempsize);
            (*index)++;
            i++;
        }
        if (*index > length) {
            free(tempdata);
            return NULL;
        }
        return tempdata;
    }
    void nextline(FILE *f) {
        int c = fgetc(f);
        while (c != EOF && c != '\n') {c = fgetc(f);}
    }
    int isNumeric(char *s) {
        if (s == NULL || *s == '\0' || isspace(*s)) return 0;
        char * p;
        strtod(s, &p);
        return *p == '\0';
    }
    char * doubleToBinary(double d) {
        char *r = (char *)malloc(sizeof(d)*8+1);
        if (r == NULL) bad_memory_allocation();
        unsigned char temp[sizeof(d)];
        memcpy(&temp, &d, sizeof(d));
        r[sizeof(d)*8] = '\0';
        for (int i = sizeof(d)*8-1; i >= 0; i--) {
            r[sizeof(d)*8-1-i] = (temp[i/8] & (1 << (i%8) )) ? '1' : '0';
        }
        return r;
    }
    double binaryToDouble(char* d) {
        double r;
        unsigned long long x = 0;
        for (; *d; ++d) {
            x = (x << 1) + (*d - '0');
        }
        memcpy(&r, &x, sizeof(r));
        return r;
    }
    char *ltrim(char *s) {
        while(isspace(*s)) s++;
        return s;
    }
    char *rtrim(char *s) {
        char* back = s + strlen(s);
        // stop at the start of the string so empty / all-space input is safe
        while (back > s && isspace(*(back-1))) back--;
        *back = '\0';
        return s;
    }
    char *trim(char *s) {
        return rtrim(ltrim(s));
    }





    /**********************\
    *                      *
    *       STRUCT         *
    *                      *
    \**********************/

    // MATRIX
    typedef struct matrix matrix;
    struct matrix {
        int rows;
        int cols;
        double **data;
    };

    // NEURAL NETWORK
    typedef struct neural_network neural_network;
    struct neural_network {
       
        // weights
        matrix *weights;
        int size;
       
        // bias
        matrix *bias;
       
        // used for backpropagation
        matrix *tempforward;
       
    };

    // NEURAL NETWORK OBJECT
    typedef struct neural_network_object neural_network_object;
    struct neural_network_object {
        char *name;
        neural_network nn;
    };

    // NEURAL NETWORK OBJECT (array)
    typedef struct neural_network_objects neural_network_objects;
    struct neural_network_objects {
        neural_network_object* nns;
        int length;
        int capacity;
    };





    /**********************\
    *                      *
    *       GLOBAL         *
    *                      *
    \**********************/

    // MATRIX
    int __precision = 8;
    double __scale = (double)1.0;

    // NEURAL NETWORK
    ///////// ACTIVATION FUNCTION /////////
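    // sigmoid(x) = 1 / (1 + e^(-x)).
    // Note: sigmoid_deriv expects the *already activated* value a = sigmoid(z),
    // so it returns a * (1 - a), i.e. the derivative of the sigmoid expressed
    // in terms of its output (as used during backpropagation below).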
    double sigmoid(double x) {
        return (double)1 / ((double)1 + exp(-x));
    }
    double sigmoid_deriv(double x) {
        return x * ( (double)1 - x);
    }
    ///////////////////////////////////////

    // NEURAL NETWORK OBJECT (array)
    neural_network_objects obj;

    // DISPLAY COMMAND (only for pipe)
    int display = 0;

    // ENGINE
    int engine_enabled = 1;
    int engine_check = 0;
    int engine_use_check = 0;





    /**********************\
    *                      *
    *       MATRIX         *
    *                      *
    \**********************/

    int count_digit(int n) {
        if (n == 0) return 1;
        if (n < 0) n*= -10; // the extra digit accounts for the '-' sign
        return (int)floor(log10(n))+1;
    }
    void matrix_free(matrix m) {
        int i;
        for (i = 0; i < m.rows; i++){ 
            free(m.data[i]);
        }
        free(m.data);
    }
    matrix matrix_create(int rows, int cols) {
       
        int i,j;
       
        double **data = (double **)malloc(rows * sizeof(double *));
        if (data == NULL) bad_memory_allocation();
        for (i = 0; i < rows; i++) {
            data[i] = (double *)malloc(cols * sizeof(double));
            if (data[i] == NULL) bad_memory_allocation();
        }
     

        for (i = 0; i < rows; i++) {
            for (j = 0; j < cols; j++) {
                data[i][j] = (double)0;
            }
        }

        matrix m = { rows, cols, data };
       
        return m;
    }
    matrix matrix_create_from_2Darray(double *arr, int rows, int cols) {
       
        int i,j;
       
        double **data = (double **)malloc(rows * sizeof(double *));
        if (data == NULL) bad_memory_allocation();
        for (i = 0; i < rows; i++) {
            data[i] = (double *)malloc(cols * sizeof(double));
            if (data[i] == NULL) bad_memory_allocation();
        }
     

        for (i = 0; i < rows; i++) {
            for (j = 0; j < cols; j++) {
                data[i][j] = (double)*((arr+i*cols)+j);
            }
        }

        matrix m = { rows, cols, data };
       
        return m;
    }
    matrix matrix_create_from_array(double *arr, int cols) {
       
        int i,j;
       
        int rows = 1;
       
        double **data = (double **)malloc(rows * sizeof(double *));
        if (data == NULL) bad_memory_allocation();
        for (i = 0; i < rows; i++) {
            data[i] = (double *)malloc(cols * sizeof(double));
            if (data[i] == NULL) bad_memory_allocation();
        }
     


            for (j = 0; j < cols; j++) {
                data[0][j] = (double)(arr[j]);
            }
       

        matrix m = { rows, cols, data };
       
        return m;
    }
    double matrix_getmax(char t,matrix m) {
        int i,j;
        double max = m.data[0][0];
        for (i = 0; i < m.rows; i++) {
            for (j = 0; j < m.cols; j++) {
                if (t == '+') {
                    if (m.data[i][j] > max) {
                        max = m.data[i][j];
                    }
                } else {
                    if (m.data[i][j] < max) {
                        max = m.data[i][j];
                    }
                }
            }
        }
        return max;
    }
    void matrix_random_fill(matrix m, double min, double max) {
        int i,j;
        for (i = 0; i < m.rows; i++) {
            for (j = 0; j < m.cols; j++) {
                m.data[i][j] = ((double)rand()/(double)RAND_MAX) * (max - min) + min;
            }
        }
    }
    void matrix_fprint(matrix m, FILE *f) {
        int i,j;
        int dmax = count_digit((int)matrix_getmax('+',m));
        int dmin = count_digit((int)matrix_getmax('-',m));
        int d;
        if (dmax > dmin) {
            d = dmax;
        } else {
            d = dmin;
        }
        for (i = 0; i < m.rows; i++) {
            for (j = 0; j < m.cols; j++) {
                int dt = count_digit(m.data[i][j]);
                for (int k = 0; k < d-dt; k++) {
                    printf(" ");
                }
                fprintf(f,"%.*lf ", __precision, m.data[i][j] * __scale);
            }
            fprintf(f,"\n");
        }
    }
    void matrix_print(matrix m) {
        int i,j;
        int dmax = count_digit((int)matrix_getmax('+',m));
        int dmin = count_digit((int)matrix_getmax('-',m));
        int d;
        if (dmax > dmin) {
            d = dmax;
        } else {
            d = dmin;
        }
        for (i = 0; i < m.rows; i++) {
            for (j = 0; j < m.cols; j++) {
                int dt = count_digit(m.data[i][j]);
                for (int k = 0; k < d-dt; k++) {
                    printf(" ");
                }
                printf("%.*lf ", __precision, m.data[i][j] * __scale);
            }
            printf("\n");
        }
    }
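    // Returns a new matrix: 't' gives the transpose of m0, any other
    // character (the code uses 'c') gives a plain copy.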
    matrix matrix_tr(char tr, matrix m0) {
        int i,j;
        matrix m;
        if (tr == 't') {
            m = matrix_create(m0.cols, m0.rows);
        } else {
            m = matrix_create(m0.rows, m0.cols);
        }
        for (i = 0; i < m.rows; i++) {
            for (j = 0; j < m.cols; j++) {
                if (tr == 't') {
                    m.data[i][j] = m0.data[j][i];
                } else {
                    m.data[i][j] = m0.data[i][j];
                }
            }
        }
        return m;
    }
    matrix matrix_map(matrix m0, double (*fun_ptr)(double)) {
        int i,j;
        matrix m = matrix_create(m0.rows, m0.cols);
        for (i = 0; i < m.rows; i++) {
            for (j = 0; j < m.cols; j++) {
                m.data[i][j] = (*fun_ptr)(m0.data[i][j]);
            }
        }
        return m;
    }
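    // Element-wise '+', '-' and '*' on matrices of identical dimensions,
    // or the matrix product when op is '.' (m0.cols must equal m1.rows).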
    matrix matrix_op(char op, matrix m0, matrix m1) {
        int i,j;
        matrix m;
        if (op == '.') {
            if (m0.cols != m1.rows) {
                fprintf(stderr, "Err: Bad matrices dimensions dot\n");
                exit(1);
            }
            m = matrix_create(m0.rows, m1.cols);
        } else {
            if (m0.cols != m1.cols || m0.rows != m1.rows) {
                fprintf(stderr, "Err: Bad matrices dimensions '%c'\n",op);
                exit(1);
            }
            m = matrix_create(m0.rows, m0.cols);
        }
        for (i = 0; i < m.rows; i++) {
            for (j = 0; j < m.cols; j++) {
                if (op == '+') {
                   m.data[i][j] = m0.data[i][j] + m1.data[i][j];
                } else if (op == '-') {
                    m.data[i][j] = m0.data[i][j] - m1.data[i][j];
                } else if (op == '*') {
                    m.data[i][j] = m0.data[i][j] * m1.data[i][j];
                } else if (op == '.') {
                    double sum = (double)0;
                    for (int k = 0; k < m0.cols; k++) {
                        sum += m0.data[i][k] * m1.data[k][j];
                    }
                    m.data[i][j] = sum;
                }
               
            }
        }
        return m;
    }
    void matrix_print_inline(matrix m, FILE *f, int binary) {
        int i,j;
        if (f == NULL) {
            printf("%d-%d-",m.rows,m.cols);
        } else {
            if (binary == 1) {
                fwrite(&m.rows, sizeof(m.rows), 1, f);
                fwrite(&m.cols, sizeof(m.cols), 1, f);
            } else {
                fprintf(f,"%d-%d-",m.rows,m.cols);
            }
        }
        for (i = 0; i < m.rows; i++) {

                for (j = 0; j < m.cols; j++) {
                    if (f == NULL) {
                        printf("%.17f\n", m.data[i][j]);
                    } else {
                        if (binary == 1) {
                            fwrite(&m.data[i][j], sizeof(double), 1, f);
                        } else {
                            fprintf(f,"%.17f", m.data[i][j]);
                        }
                    }
                }
           
        }
    }





    /**********************\
    *                      *
    *    NEURAL NETWORK    *
    *                      *
    \**********************/

    neural_network neural_network_create(int *arr, int length) {
       
        matrix *weights = (matrix *)malloc((length-1) * sizeof(matrix));
        if (weights == NULL) bad_memory_allocation();
        matrix *bias = (matrix *)malloc((length-1) * sizeof(matrix));
        if (bias == NULL) bad_memory_allocation();
        matrix *temp = (matrix *)malloc(length * sizeof(matrix));
        if (temp == NULL) bad_memory_allocation();
       
       
        for (int i = 0; i < length-1; i++) {
            weights[i] = matrix_create(arr[i],arr[i+1]);
            bias[i] = matrix_create(1,arr[i+1]);
            matrix_random_fill(weights[i],-1,1);
            matrix_random_fill(bias[i],-1,1);
            temp[i] = matrix_create(1,1);
            matrix_random_fill(temp[i],-1,1);
        }
        temp[length-1] = matrix_create(1,1);
       
        neural_network nn = { weights, length-1, bias, temp };
       
        return nn;
       
    }
    void neural_network_print(neural_network nn) {
        for (int i = 0; i < nn.size; i++) {
            printf("weight %d:\n",i);
            matrix_print(nn.weights[i]);
            printf("-------------------------------------\n");
        }
    }
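    // Forward pass: feeds arr through every layer (weights, bias, sigmoid)
    // and returns the output activations. A copy of each layer's activation
    // is kept in nn.tempforward so that neural_network_train can reuse it
    // during backpropagation. The caller must free the returned matrix.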
    matrix neural_network_forward(neural_network nn, double *arr, int length) {
        matrix input = matrix_create_from_array(arr, length);
        matrix_free(nn.tempforward[0]);
        nn.tempforward[0] = matrix_tr('c',input);
        double (*fun_ptr)(double) = &sigmoid;
        matrix temp = matrix_create(1,1);
        for (int i = 0; i < nn.size; i++) {
            matrix_free(nn.tempforward[i+1]);
            if (i==0) {
                matrix _dot = matrix_op('.',input, nn.weights[i]);
                matrix _add = matrix_op('+',_dot,nn.bias[i]);
                matrix_free(temp);
                temp = matrix_map(_add,fun_ptr);
                nn.tempforward[i+1] = matrix_tr('c',temp);
                matrix_free(_add);
                matrix_free(_dot);
            } else {
                matrix _dot = matrix_op('.',temp, nn.weights[i]);
                matrix _add = matrix_op('+',_dot,nn.bias[i]);
                matrix_free(temp);
                temp = matrix_map(_add,fun_ptr);
                nn.tempforward[i+1] = matrix_tr('c',temp);
                matrix_free(_add);
                matrix_free(_dot);
            }
        }
        matrix_free(input);
        return temp;
    }
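    // One training step (gradient descent with an implicit learning rate of 1):
    // runs a forward pass, computes the output error (target - output), turns it
    // into a delta with the sigmoid derivative, then walks the layers backwards,
    // adding activation^T . delta to each weight matrix and delta to each bias.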
    void neural_network_train(neural_network nn, double *arrinput, int lengthinput, double *arrtarget, int lengthtarget) {
        double (*fun_ptr)(double) = &sigmoid_deriv;
        matrix target = matrix_create_from_array(arrtarget, lengthtarget);
        matrix output = neural_network_forward(nn, arrinput, lengthinput);
        matrix outputerror = matrix_op('-',target, output);
        matrix outputderive = matrix_map(output, fun_ptr);
        matrix outputdelta = matrix_op('*',outputerror, outputderive);
        matrix hiddenderive,hiddenerror;
        matrix hiddendeltalast = matrix_tr('c',outputdelta);
        matrix hiddendelta = matrix_tr('c',outputdelta);
        for (int i = 0; i < nn.size; i++) {
            matrix _t = matrix_tr('t',nn.weights[nn.size-i-1]);
            hiddenerror = matrix_op('.',hiddendelta,_t);
            matrix_free(_t);
            matrix_free(hiddendelta);
            hiddenderive = matrix_map(nn.tempforward[nn.size-i-1], fun_ptr);
            hiddendelta = matrix_op('*',hiddenerror, hiddenderive);
            matrix_free(hiddenerror);
            matrix_free(hiddenderive);
            _t = matrix_tr('t',nn.tempforward[nn.size-i-1]);
            matrix _dot = matrix_op('.',_t,hiddendeltalast);
            matrix _temp = nn.weights[nn.size-i-1];
            nn.weights[nn.size-i-1] =
            matrix_op('+',
                nn.weights[nn.size-i-1],
                _dot
            );
            matrix_free(_temp);
            matrix_free(_dot);
            matrix_free(_t);
            _temp = nn.bias[nn.size-i-1];
            nn.bias[nn.size-i-1] =
            matrix_op('+',
                nn.bias[nn.size-i-1],
                hiddendeltalast
            );
            matrix_free(_temp);
            matrix_free(hiddendeltalast);
            hiddendeltalast = matrix_tr('c',hiddendelta);

           
        }
        matrix_free(hiddendelta);
        matrix_free(hiddendeltalast);
        matrix_free(target);
        matrix_free(output);
        matrix_free(outputerror);
        matrix_free(outputderive);
        matrix_free(outputdelta);
    }
    void neural_network_free(neural_network nn) {
        for (int i = 0; i < nn.size; i++) {
            matrix_free(nn.weights[i]);
            matrix_free(nn.bias[i]);
            matrix_free(nn.tempforward[i]);
        }
        matrix_free(nn.tempforward[nn.size]);
        free(nn.weights);
        free(nn.bias);
        free(nn.tempforward);
    }
    void neural_network_export(neural_network nn, FILE *f, int binary) {
       
        // format:
        // <size>-(<rows>-<cols>-<matrix>)...
       
        if (f == NULL) {
            printf("%d-",nn.size);
        } else {
            if (binary == 1) {
                fwrite(&nn.size, sizeof(nn.size), 1, f);
            } else {
                fprintf(f,"%d-",nn.size);
            }
        }
        for (int i = 0; i < nn.size; i++) {
            matrix_print_inline(nn.weights[i],f,binary);
        }
        for (int i = 0; i < nn.size; i++) {
            matrix_print_inline(nn.bias[i],f,binary);
        }
        for (int i = 0; i <= nn.size; i++) {
            matrix_print_inline(nn.tempforward[i],f,binary);
        }
       
       
       
       
    }
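    // Reads a binary export back from f and rebuilds the textual
    // "<size>-(<rows>-<cols>-<doubles>)..." representation so it can be fed
    // to neural_network_import. The caller must free the returned string.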
    char * neural_network_importbinary(FILE *f) {
       
        int size = 0;
        fread(&size, sizeof(int), 1, f);
       
        int tempsize = 2;
        char *s = malloc(tempsize * sizeof(char));
        if (s == NULL) bad_memory_allocation();
        int len = 0;
       
        int length;
        char *str;
       
        length = snprintf(NULL, 0, "%d", size);
        str = (char *)malloc((length + 1) * sizeof(char));
        if (str == NULL) bad_memory_allocation();
        snprintf( str, length + 1, "%d", size);
        for (int i = 0; i < length; i++) putcharline(str[i], &s, &len, &tempsize);
        free(str);
       
        putcharline('-', &s, &len, &tempsize);
       
        int nbmatrix = size * 3 + 1;
        int index = 0;
       
        while (index < nbmatrix) {
           
            int rows = 0;
            fread(&rows, sizeof(int), 1, f);
            int cols = 0;
            fread(&cols, sizeof(int), 1, f);
           
            length = snprintf(NULL, 0, "%d", rows);
            str = (char *)malloc((length + 1) * sizeof(char));
            if (str == NULL) bad_memory_allocation();
            sprintf( str, "%d", rows);
            for (int i = 0; i < length; i++) putcharline(str[i], &s, &len, &tempsize);
            free(str);
           
            putcharline('-', &s, &len, &tempsize);
           
            length = snprintf(NULL, 0, "%d", cols);
            str = (char *)malloc((length + 1) * sizeof(char));
            if (str == NULL) bad_memory_allocation();
            sprintf(str, "%d", cols);
            for (int i = 0; i < length; i++) putcharline(str[i], &s, &len, &tempsize);
            free(str);
           
            putcharline('-', &s, &len, &tempsize);
           
            for (int y = 0; y < rows; y++) {
                for (int x = 0; x < cols; x++) {
                   
                    double d = (double)0;
                    fread(&d, sizeof(double), 1, f);
                   
                    length = snprintf(NULL, 0, "%.17f", d);
                    str = (char *)malloc((length + 1) * sizeof(char));
                    if (str == NULL) bad_memory_allocation();
                    snprintf( str, length + 1, "%.17f", d);
                    for (int i = 0; i < length; i++) putcharline(str[i], &s, &len, &tempsize);
                    free(str);
                   
                }
            }
           
            index++;
           
        }
       
        return s;
    }
    int neural_network_importcheck(char *data) {
       
        char *temp;
       
        // format:
        // <size>-(<rows>-<cols>-<matrix>)...
       
        int length = strlen(data);
        int j = 0;
       
        // get neural network size
        temp = trygetdata('-',data,&j,length);
        if (temp == NULL) return 1;
        if (isNumeric(temp) == 0) {free(temp);return 1;}
        int size = atoi(temp);
        free(temp);
       
        int k = 0;
        int l = 0;
       

        int check = 0;
        for (int i = j; i < length; i++) {
           
            // get matrix rows
            temp = trygetdata('-',data,&i,length);
            if (temp == NULL) return 1;
            if (isNumeric(temp) == 0) {free(temp);return 1;}
            int rows = atoi(temp);
            free(temp);
           
            // get matrix cols
            temp = trygetdata('-',data,&i,length);
            if (temp == NULL) return 1;
            if (isNumeric(temp) == 0) {free(temp);return 1;}
            int cols = atoi(temp);
            free(temp);
           
            // get matrix data
            matrix m = matrix_create(rows,cols);
            for (int y = 0; y < rows; y++) {
                for (int x = 0; x < cols; x++) {
                   
                    // get next double
                    char *firstpart = NULL;
                    char *secondpart = NULL;
                    firstpart = trygetdata('.',data,&i,length);
                    if (firstpart == NULL) {
                        matrix_free(m);
                        return 1;
                    }
                    secondpart = trygetdatan(17,data,&i,length);
                    if (secondpart == NULL) {
                        matrix_free(m);
                        return 1;
                    }
                    if (isNumeric(firstpart) == 0) {matrix_free(m);free(firstpart);return 1;}
                    if (isNumeric(secondpart) == 0) {matrix_free(m);free(secondpart);return 1;}
                   
                    char *_double = (char *)malloc((strlen(firstpart)+1+17+1) * sizeof(char)); // room for '.', up to 17 digits and '\0'
                    if (_double == NULL) bad_memory_allocation();
                    strcpy(_double, firstpart);
                    strcat(_double, ".");
                    strcat(_double, secondpart);
                   
                    m.data[y][x] = atof(_double);
                   
                   
                    free(_double);
                    free(firstpart);
                    free(secondpart);
                   
                }
            }
            matrix_free(m);
           
            if (i == length) {
                break;
            } else if (l > length) {
               return 1;
            }
           
            l++;
            if (k < 2 && l == size) {
                l = 0;
                k++;
            }
            i--;
           
           
            if (check >= length) {
                return 1;
            }
           
            check++;
           
        }
       
        return 0;
    }
    neural_network neural_network_import(char *data, int *success) {
       
        *success = 0;
       
        if (neural_network_importcheck(data) == 1) {
            neural_network nn = { NULL, 0, NULL, NULL }; // *success stays 0, caller must not use this
            return nn;
        }
       
        char *temp;
       
        // format:
        // <size>-(<rows>-<cols>-<matrix>)...
       
        int length = strlen(data);
        int j = 0;
       
        // get neural network size
        temp = trygetdata('-',data,&j,length);
        int size = atoi(temp);
        free(temp);
       
        matrix *weights = (matrix *)malloc((size) * sizeof(matrix));
        if (weights == NULL) bad_memory_allocation();
        matrix *bias = (matrix *)malloc((size) * sizeof(matrix));
        if (bias == NULL) bad_memory_allocation();
        matrix *tempn = (matrix *)malloc((size+1) * sizeof(matrix));
        if (tempn == NULL) bad_memory_allocation();
       
        int k = 0;
        int l = 0;

        int check = 0;
        for (int i = j; i < length; i++) {
           
            // get matrix rows
            temp = trygetdata('-',data,&i,length);
            int rows = atoi(temp);
            free(temp);
           
            // get matrix cols
            temp = trygetdata('-',data,&i,length);
            int cols = atoi(temp);
            free(temp);
           
            // get matrix data
            if (k == 0) {
                weights[l] = matrix_create(rows,cols);
            } else if (k == 1) {
                bias[l] = matrix_create(rows,cols);
            } else {
                tempn[l] = matrix_create(rows,cols);
            }
            for (int y = 0; y < rows; y++) {
                for (int x = 0; x < cols; x++) {
                   
                    // get next double
                    char *firstpart = NULL;
                    char *secondpart = NULL;
                    firstpart = trygetdata('.',data,&i,length);
                    secondpart = trygetdatan(17,data,&i,length);
                   
                    char *_double = (char *)malloc((strlen(firstpart)+1+17+1) * sizeof(char)); // room for '.', up to 17 digits and '\0'
                    if (_double == NULL) bad_memory_allocation();
                    strcpy(_double, firstpart);
                    strcat(_double, ".");
                    strcat(_double, secondpart);
                   
                    if (k == 0) {
                        weights[l].data[y][x] = atof(_double);
                    } else if (k == 1) {
                        bias[l].data[y][x] = atof(_double);
                    } else {
                        tempn[l].data[y][x] = atof(_double);
                    }
                   
                   
                    free(_double);
                    free(firstpart);
                    free(secondpart);
                   
                }
            }
           
            if (i == length) {
                break;
            } else if (l > length) {
                fprintf(stderr, "Err: bad data\n");
                exit(1);
            }
           
            l++;
            if (k < 2 && l == size) {
                l = 0;
                k++;
            }
            i--;
           
            if (check >= length) {
                fprintf(stderr, "Err: bad data\n");
                exit(1);
            }
           
            check++;
           
        }
       
        neural_network nn = { weights, size, bias, tempn };
       
        *success = 1;
       
        return nn;
       
    }





    /***********************\
    *                       *
    * COMMANDS DECLARATIONS *
    *                       *
    \***********************/

    void nn_create(char *, int *, int);
    void nn_delete(char *);
    void nn_evaluate(char *, double *, int, char *, int, int);
    void nn_export(char *, char *, int, int, int);
    void nn_import(char *, char *, char *, int);
    void nn_train(char *, double *, int, double *, int, int);
    void nn_merge(char *, char *, char *, int);
    void nn_mutate(char *, double);
    void nn_rename(char *, char *, int);
    void nn_getadn(char *, char*, int, int, int);





    /**********************\
    *                      *
    *      NN OBJECT       *
    *                      *
    \**********************/

    void neural_network_objects_init(neural_network_objects *obj) {
        obj->length = 0;
        obj->capacity = 1;
        obj->nns = (neural_network_object *)malloc(obj->capacity * sizeof(neural_network_object));
        if (obj->nns == NULL) bad_memory_allocation();
    }
    void neural_network_objects_insert(neural_network_objects *obj, char *name, int *arr, int length) {
        if (obj->length+1 >= obj->capacity) {
            obj->capacity *= 2;
            neural_network_object* temp = (neural_network_object*)realloc(obj->nns, (obj->capacity+1) * sizeof(neural_network_object));
            if (temp == NULL) bad_memory_allocation();
            obj->nns = temp;
        }
        neural_network nn = neural_network_create(arr,length);
        neural_network_object nnobj;
        nnobj.nn = nn;
        char *temp2 = (char *)malloc((strlen(name)+1)*sizeof(char));
        if (temp2 == NULL) bad_memory_allocation();
        strcpy(temp2, name);
        nnobj.name = temp2;
        obj->nns[obj->length] = nnobj;
        obj->length++;
    }
    void neural_network_objects_insertnn(neural_network_objects *obj, char *name, neural_network nn) {
        if (obj->length+1 >= obj->capacity) {
            obj->capacity *= 2;
            neural_network_object* temp = (neural_network_object*)realloc(obj->nns, (obj->capacity+1) * sizeof(neural_network_object));
            if (temp == NULL) bad_memory_allocation();
            obj->nns = temp;
        }
        neural_network_object nnobj;
        nnobj.nn = nn;
        char *temp = (char *)malloc((strlen(name)+1)*sizeof(char));
        if (temp == NULL) bad_memory_allocation();
        strcpy(temp, name);
        nnobj.name = temp;
        obj->nns[obj->length] = nnobj;
        obj->length++;
    }
    neural_network_object neural_network_objects_get(neural_network_objects *obj, char *name) {
        for (int i = 0; i < obj->length; i++) {
            if (strcmp(obj->nns[i].name,name) == 0) {
                return obj->nns[i];
            }
        }
        fprintf(stderr, "Err: Unknow \"%s\" Neural Network\n",name);
        exit(1);
    }
    int neural_network_objects_is_exist(neural_network_objects *obj, char *name) {
        for (int i = 0; i < obj->length; i++) {
            if (strcmp(obj->nns[i].name,name) == 0) {
                return 1;
            }
        }
        return 0;
    }
    void neural_network_objects_remove(neural_network_objects *obj, char *name) {
        int res = -1;
        for (int i = 0; i < obj->length; i++) {
            if (strcmp(obj->nns[i].name,name) == 0) {
                res = i;
                neural_network_free(obj->nns[i].nn);
                free(obj->nns[i].name);
                break;
            }
        }
        if (res >= 0) {
            for (int i = res; i < obj->length-1; i++) {
                obj->nns[i] = obj->nns[i+1];
            }
            obj->length--;
            if (obj->length < (obj->capacity/2)+1) {
                obj->capacity /= 2;
                obj->capacity++;
                neural_network_object* temp = (neural_network_object*)realloc(obj->nns, (obj->capacity+1) * sizeof(neural_network_object));
                if (temp == NULL) bad_memory_allocation();
                obj->nns = temp;
            }
        }
    }





    /**********************\
    *                      *
    *    PARSER ARGUMENTS  *
    *                      *
    \**********************/

    int process_command(int, char**); // DECLARATION
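    // Splits argv into groups of arguments, starting a new group at every token
    // that begins with '/'. Returns an array of groups; lengths[0] holds the
    // number of groups and lengths[i+1] the argc of group i (the leading group
    // holds anything before the first command and may be empty).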
    char*** parse_multi_arguments(int argc, char *argv[], int **lengths) {
        char ***_argv = (char ***)malloc(sizeof(char **));
        if (_argv == NULL) bad_memory_allocation();
        *lengths = (int *)malloc(2*sizeof(int));
        if (*lengths == NULL) bad_memory_allocation();
        int indexlength = 1;
        int sindex = 0;
        int index = 0;
        char **_temp = (char **)malloc(sizeof(char *));
        if (_temp == NULL) bad_memory_allocation();
        for (int i = 0; i < argc; i++) {
            int len = strlen(argv[i]);
            if (len > 0) {
                if (argv[i][0] == '/') {
                    _argv[sindex] = _temp;
                    sindex++;
                    (*lengths)[0] = sindex;
                    (*lengths)[indexlength] = index;
                    indexlength++;
                    int *realloctemp = (int *)realloc(*lengths,(indexlength+1)*sizeof(int));
                    if (realloctemp == NULL) bad_memory_allocation();
                    *lengths = realloctemp;
                    char ***realloctemp2 = (char ***)realloc(_argv,(sindex+1)*sizeof(char **));
                    if (realloctemp2 == NULL) bad_memory_allocation();
                    _argv = realloctemp2;
                    _temp = (char **)malloc(sizeof(char *));
                    if (_temp == NULL) bad_memory_allocation();
                    index = 0;
                }
            }
            _temp[index] = argv[i];
            index++;
            char **realloctemp3 = (char **)realloc(_temp,(index+1) * sizeof(char *));
            if (realloctemp3 == NULL) bad_memory_allocation();
            _temp = realloctemp3;
        }
        _argv[sindex] = _temp;
        sindex++;
        (*lengths)[0] = sindex;
        (*lengths)[indexlength] = index;
        return _argv;
    }
    void parse_multi_arguments_free(char ***arr, int *lengths) {
        for (int i = 0; i < lengths[0]; i++) {
            free(arr[i]);
        }
        free(arr);
        free(lengths);
    }
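    // Collects the numeric values that follow the flag d (e.g. "--input") into a
    // newly allocated double array and stores their count in *length. Exits with
    // an error if the flag is absent or no numeric value follows it.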
    double* get_arguments_double(int argc, char *argv[], char *d, int *length) {
        int b = 0;
        int len = 0;
        int capacity = 2;
        double *arr = (double *)malloc((capacity+10)*sizeof(double));
        if (arr == NULL) bad_memory_allocation();
        for (int i = 0; i < argc; i++) {
            if (strcmp(argv[i],d) == 0) {
                b = 1;
                continue;
            }
            if (b == 1) {
                if (isNumeric(argv[i])) {
                    if(len == capacity){
                        capacity *= 2;
                        double *realloctemp = (double *)realloc(arr, (capacity+10) * sizeof(double));
                        if (realloctemp == NULL) bad_memory_allocation();
                        arr = realloctemp;
                    }
                    arr[len] = atof(argv[i]);
                    len++;
                } else {
                    break;
                }
            }
        }
        if (b == 0) {
            fprintf(stderr, "Err: \"%s\" was expected\n",d);
            exit(1);
        }
        if (len == 0) {
            fprintf(stderr, "Err: Can't found arguments for \"%s\"\n",d);
            exit(1);
        }
        *length = len;
        arr[len] = (double)0.0;
        arr[len+1] = (double)0.0;
        arr[len+2] = (double)0.0;
        return arr;
    }
    int* get_arguments_int(int argc, char *argv[], char *d, int *length) {
        int b = 0;
        int len = 0;
        int capacity = 2;
        int *arr = (int *)malloc((capacity+10)*sizeof(int));
        if (arr == NULL) bad_memory_allocation();
        for (int i = 0; i < argc; i++) {
            if (strcmp(argv[i],d) == 0) {
                b = 1;
                continue;
            }
            if (b == 1) {
                if (isNumeric(argv[i])) {
                    if(len == capacity){
                        capacity *= 2;
                        int *realloctemp = (int *)realloc(arr, (capacity+10) * sizeof(int));
                        if (realloctemp == NULL) bad_memory_allocation();
                        arr = realloctemp;
                    }
                    arr[len] = atoi(argv[i]);
                    len++;
                } else {
                    break;
                }
            }
        }
        if (b == 0) {
            fprintf(stderr, "Err: \"%s\" was expected\n",d);
            exit(1);
        }
        if (len == 0) {
            fprintf(stderr, "Err: Can't found arguments for \"%s\"\n",d);
            exit(1);
        }
        *length = len;
        arr[len] = 0;
        arr[len+1] = 0;
        arr[len+2] = 0;
        return arr;
    }
    int get_argument(int argc, char *argv[], char *d, char **arr) {
        int b = 0;
        int c = 0;
        for (int i = 0; i < argc; i++) {
            if (strcmp(argv[i],d) == 0) {
                b = 1;
                if (arr == NULL) {
                    c = 1;
                    break;
                }
                continue;
            }
            if (b == 1) {
                c = 1;
                *arr = argv[i];
                break;
            }
        }
        if (c == 0) {
            return 1;
        }
        return 0;
    }
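    // Splits a raw command line into argv-style tokens: arguments are separated
    // by spaces, and double quotes group words together (the quotes themselves
    // are stripped). The token count is returned through *_argc.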
    char **parse_arguments(char *_line, int length, int *_argc) {
        char* line = trim(_line);
        length = strlen(line);
        char **argv = (char **)malloc(sizeof(char *));
        if (argv == NULL) bad_memory_allocation();
        int current_size = 2;
        int index = 0;
        int i = 0;
        int count = 0;
        int b = 0;
        char c;
        char *temp = (char *)malloc(current_size * sizeof(char));
        if (temp == NULL) bad_memory_allocation();
        while (i < length) {
            c = line[i];
            if (i > 0) {
                if (b == 0) {
                    if (line[i-1] == ' ' && c == ' ') {
                        i++;
                        continue;
                    }
                }
            }
            if (c == '"') {
                if (b == 0) {
                    b = 1;
                } else {
                    b = 0;
                    if (i < length-1) {
                        if (line[i+1] != ' ') {
                            if (c != '"') putcharline(c,&temp,&index,&current_size);
                            putcharline(line[i+1],&temp,&index,&current_size);
                            i++;i++;
                            continue;
                        }
                    }
                }
            }
            if (b == 0) {
                if (c != ' ') {
                    if (c != '"') putcharline(c,&temp,&index,&current_size);
                } else {
                    current_size = 2;
                    index = 0;
                    argv[count] = temp;
                    temp = (char *)malloc(current_size * sizeof(char));
                    if (temp == NULL) bad_memory_allocation();
                    count++;
                    char **realloctemp = (char **)realloc(argv,(count+1) * sizeof(char *));
                    if (realloctemp == NULL) bad_memory_allocation();
                    argv = realloctemp;
                }
            } else {
                if (c != '"') {putcharline(c,&temp,&index,&current_size);}
            }
           
            i++;
        }
        argv[count] = temp;
        count++;
        *_argc = count;
        return argv;
    }





    /**********************\
    *                      *
    *     PROCESS DATA     *
    *                      *
    \**********************/

    void process_pipe(int argc, char *argv[]) {
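        // Main loop for piped mode: reads stdin character by character, echoes
        // the input to stdout while the display is on (or the engine is
        // disabled), clears the screen on a form feed (ASCII 12), and hands
        // every completed line to process_command. A return value of 2 from
        // process_command stops the loop.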
        char *line = (char *)malloc(2 * sizeof(char));
        if (line == NULL) bad_memory_allocation();
        int ch;
        long capacity = 0;
        long length = 0;
        char *temp = NULL;
        int temp_display = 0;
        while (1) {
           
            ch = getc(stdin);
            if (ch == 12) {
                system("cls");
                continue;
            }
           
            // just redirect to stdout
            if ((display == 1 && temp_display == 1) || engine_enabled == 0) {
                putc(ch,stdout);
            }
           
           
            if (ch != '\n' && ch != EOF) {
                if((length + 1) >= capacity) {
                    if (capacity == 0) {
                        capacity = 2;
                    } else {
                        capacity *= 2;
                    }
                    temp = (char *)realloc(line, capacity * sizeof(char));
                    if (temp == NULL) bad_memory_allocation();
                    line = temp;
                }
                line[length] = (char) ch;
                length++;
                line[length] = '\0';
                if (length > 5 && strcmp(line, "/hide") == 0 && engine_enabled == 1) {
                    display = 0;
                   
                } else if (display != 0 && length > 5 && temp_display == 0 && engine_enabled == 1) {
                    printf("%s", line);
                    temp_display = 1;
                }
            } else {
               
                temp = (char *)realloc(line, (length + 1) * sizeof(char));
                if (temp == NULL) bad_memory_allocation();
                line = temp;
                line[length] = '\0';
               
               
                int _argc;
                char **_argv = parse_arguments(line, length, &_argc);
               
               
                int *lengths;
                char*** multiarrayargs = parse_multi_arguments(_argc, _argv, &lengths);
                for (int i = 0; i < lengths[0]; i++) {
                    int r = process_command(lengths[i+1], multiarrayargs[i]);
                    if (r == 2) {
                        parse_multi_arguments_free(multiarrayargs,lengths);
                        for (int i = 0; i < _argc; i++) {
                            free(_argv[i]);
                        }
                        free(_argv);
                        return;
                    }
                }
                parse_multi_arguments_free(multiarrayargs,lengths);
               
                for (int i = 0; i < _argc; i++) {
                    free(_argv[i]);
                }
                free(_argv);
               
                // reset
                capacity = 0;
                length = 0;
                temp_display = 0;
               
            }
           
        }
    }
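    /*
     * process_command (descriptive comment added for readability)
     * Executes one parsed command (argv[0] is the command name, e.g. "/create").
     * Return values: 0 = command handled, 1 = unknown command or ignored
     * because the engine is disabled, 2 = /exit was requested.
     *
     * Illustrative command lines (assumed, based on the built-in /help text):
     *   /create xor --structure 2 5 1
     *   /train xor --input 0 0 0 1 1 0 1 1 --output 0 1 1 0 --loop 10000
     *   /evaluate xor --input 1 0
     */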
    int process_command(int argc, char *argv[]) {
        if (argc == 0) return 1;
        if (strcmp(argv[0], "/enable") == 0) {
            if (engine_enabled == 0) {
                if (engine_use_check == 1) {
                    if (argc <= 1) return 1;
                    if (engine_check != atoi(argv[1])) return 1;
                }
            }
            engine_enabled = 1;
            return 0;
        }
        if (strcmp(argv[0], "/disable") == 0) {
            if (argc > 1) {
                engine_check = atoi(argv[1]);
                engine_use_check = 1;
            } else {
                engine_use_check = 0;
            }
            engine_enabled = 0;
            return 0;
        }
        if (strcmp(argv[0], "/show") == 0) {
            if (engine_enabled == 0) {
                if (engine_use_check == 1) {
                    if (argc <= 1) return 1;
                    if (engine_check != atoi(argv[1])) return 1;
                }
            }
            display = 1;
            return 0;
        }
        if (strcmp(argv[0], "/hide") == 0) {
            if (engine_enabled == 0) {
                if (engine_use_check == 1) {
                    if (argc <= 1) return 1;
                    if (engine_check != atoi(argv[1])) return 1;
                }
            }
            display = 0;
            return 0;
        }
        if (engine_enabled == 0) return 1;
        if (strcmp(argv[0], "/print") == 0) {
            if (argc > 1) {
                printf("%s\n",argv[1]);
            } else {
                printf("\n");
            }
            return 0;
        }
        if (strcmp(argv[0], "/printn") == 0) {
            if (argc > 1) {
                printf("%s",argv[1]);
            }
            return 0;
        }
        if (strcmp(argv[0], "/about") == 0) {
            printf("Creator: Flammrock\n");
            printf("License: MIT\n");
            printf("Year   : 2020\n");
            return 0;
        }
        if (strcmp(argv[0], "/help") == 0) {
            if (argc == 1) {
                printf("list of available commands: \n");
                printf("\n");
                printf("      /create <name> --structure <layer>...\n");
                printf("      /delete <name>\n");
                printf("      /rename <name> <new_name> [--noerr]\n");
                printf("      /export <name> [--out <file>] [--append]\n");
                printf("      /import <name> <new_name> (--file <file> | --string <string>)\n");
                printf("      /evaluate <name> --input <double>... [--normalize (<min> <max>)...] [--out <file>] [--append]\n");
                printf("      /train <name> --input <double>... --output <double>... [[--normalize (<min> <max>)...] | [--normalize-input (<min> <max>)...] [--normalize-output (<min> <max>)...]] [--loop <int>]\n");
                printf("      /merge <parent_name_1> <parent_name_2> <child_name> [--mixing-power <int>]\n");
                printf("      /mutate <name> [--rate <double>]\n");
                printf("      /getdna <name> [--out <file>] [--append] [--maxlengthline]\n");
                printf("      /enable [<int>]\n");
                printf("      /disable [<int>]\n");
                printf("      /print <string>\n");
                printf("      /printn <string>\n");
                printf("      /show [<int>]\n");
                printf("      /hide [<int>]\n");
                printf("      /about\n");
                printf("      /exit\n");
                printf("      /help [<command_name>]\n\n");
            } else {
               
               
                if (strcmp(argv[1], "/create") == 0 || strcmp(argv[1], "create") == 0) {
                    printf("\n    '/create <name> --structure <layer>...'\n\n  This command create a neural network\n");
                    printf("     <name>              : Just the name of the neural network, this name is used as a unique ID\n");
                    printf("                         If a neural network already have this name, an error is returned (failed to rename)\n");
                    printf("     --structure <layer> : Allows to define the structure of the neural network\n");
                    printf("                         This first integer is for the \"input layer\" while the last integer is for the \"output layer\"\n");
                    printf("                         sample: --structure 2 5 1 (That create a neural network with two neurons in input layer, five\n");
                    printf("                         neurons in hidden layer and 1 neuron in output layer\n\n");
                } else if (strcmp(argv[1], "/delete") == 0 || strcmp(argv[1], "delete") == 0) {
                    printf("\n    '/delete <name>'\n\n  This command delete a neural network\n");
                    printf("     <name> : Delete the neural network which have this name. If no neural networks have this name, no error is returned\n\n");
                } else if (strcmp(argv[1], "/rename") == 0 || strcmp(argv[1], "rename") == 0) {
                    printf("\n    '/rename <name> <new_name> [--noerr]'\n\n  This command rename a neural network\n");
                    printf("     <name>     : the old name\n");
                    printf("     <new_name> : the new name If a neural network already have this name, an error is returned (failed to rename)\n");
                    printf("    Optional:\n");
                    printf("     --noerr : if the neural network <name> not exist or if the neural network <new_name> already exist, no error is returned if this parameter is specified\n\n");
                } else if (strcmp(argv[1], "/print") == 0 || strcmp(argv[1], "print") == 0) {
                    printf("\n    '/print <string>'\n\n  This command print a string\n");
                    printf("     <string> : a simple string\n\n");
                } else if (strcmp(argv[1], "/printn") == 0 || strcmp(argv[1], "printn") == 0) {
                    printf("\n    '/printn <string>'\n\n  This command print a string without new line character\n");
                    printf("     <string> : a simple string\n\n");
                } else if (strcmp(argv[1], "/export") == 0 || strcmp(argv[1], "export") == 0) {
                    printf("\n    '/export <name> [--out <file>] [--append]'\n\n  This command export a neural network to a file or print it in the console\n");
                    printf("     <name>       : name of the neural network\n");
                    printf("    Optional:\n");
                    printf("     --out <file> : write the neural network into a file\n");
                    printf("     --append     : if specified, the neural network is appended to the file (need the previous argument)\n\n");
                } else if (strcmp(argv[1], "/import") == 0 || strcmp(argv[1], "import") == 0) {
                    printf("\n    '/import <name> <new_name> (--file <file> | --string <string>)'\n\n  This command imports a neural network that was exported before\n");
                    printf("     <name>            : name of the neural network to import\n");
                    printf("     <new_name>        : new name of the neural network (can be the same as the argument \"<name>\")\n");
                    printf("     --file <file>     : file where the neural network is located\n");
                    printf("     --string <string> : representation of a neural network\n\n");
                } else if (strcmp(argv[1], "/evaluate") == 0 || strcmp(argv[1], "evaluate") == 0) {
                    printf(" '/evaluate <name> --input <double>... [--normalize (<min> <max>)...] [--out <file>]'\n\n  This command evaluate a neural network with a specified input\n");
                    printf("     <name>              : name of the neural network\n");
                    printf("     --input <double>... : the specified input must fit the first layer of the neural network\n");
                    printf("                         if the structure of the neural network is like : 5 8 8 9 3 2\n");
                    printf("                         we need to pass an input of 5 numbers like that : 0.26 0.56 0.89 0.42 0.12\n");
                    printf("    Optional:\n");
                    printf("     --out <file>            : write the result to a file (if not specified, the result is print in the console)\n");
                    printf("     --normalize (<min> <max>)... : normalize each double of the \"--input\" parameter between <min> and <max>\n\n");
                } else if (strcmp(argv[1], "/train") == 0 || strcmp(argv[1], "train") == 0) {
                    printf("\n    '/train <name> --input <double>... --output <double>... [[--normalize (<min> <max>)...] | [--normalize-input (<min> <max>)...] [--normalize-output (<min> <max>)...]] [--loop <int>]'\n\n  This command train a neural network with inputs data and outputs data\n");
                    printf("     <name>              : name of the neural network\n");
                    printf("     --input <double>... : the specified input must fit the first layer of the neural network\n");
                    printf("                         if the structure of the neural network is like : 5 8 8 9 3 2\n");
                    printf("                         we need to pass an input of 5 numbers like that : 0.26 0.56 0.89 0.42 0.12\n");
                    printf("     --output <double>... : the specified output must fit the last layer of the neural network\n");
                    printf("                         if the structure of the neural network is like : 5 8 8 9 3 2\n");
                    printf("                         we need to pass an output of 2 numbers like that : 0.12 0.86\n\n");
                    printf("      But for accurate training, we can pass more than 5 numbers or 2 numbers, but for a better understanding of that,\n");
                    printf("      the XOR example will be nice to study :)\n\n");
                    printf("    Optional:\n");
                    printf("     --loop <int>            : number of iterations\n");
                    printf("                             (the neural network learn very slowly because the global weights and bias follow the gradient of partials derivatives)\n");
                    printf("        default: 10000\n");
                    printf("     --normalize (<min> <max>)... : normalize each double of the \"--input\" and \"--output\" parameters between <min> and <max>\n");
                    printf("     --normalize-input (<min> <max>)... : same as --normalize but normalize only the input\n");
                    printf("     --normalize-output (<min> <max>)... : same as --normalize but normalize only the output\n\n");
                } else if (strcmp(argv[1], "/merge") == 0 || strcmp(argv[1], "merge") == 0) {
                    printf("\n    '/merge <parent_name_1> <parent_name_2> <child> [--mixing-power <int>]'\n\n  This command merge two neurals networks (with same structure) into one neural network\n");
                    printf("     <parent_name_1> : name of a neural network\n");
                    printf("     <parent_name_2> : name of a neural network\n");
                    printf("     <child>         : name of the neural network\n");
                    printf("    Optional:\n");
                    printf("     --mixing-power <int> : the strength of mixing (not proportional to the size of neurals networks)\n");
                    printf("                          so more neural networks are big, more the mixing power must be big to \"preserve\" the same mixing as little neural networks\n");
                    printf("        default: 15\n\n");
                } else if (strcmp(argv[1], "/mutate") == 0 || strcmp(argv[1], "mutate") == 0) {
                    printf("\n    '/mutate <name> [--rate <double>]'\n\n  This command mutate a neural network (some weights and some bias are randomly mutate)\n");
                    printf("     <name> : name of the neural network\n");
                    printf("    Optional:\n");
                    printf("     --rate <double> : must be between 0 and 1. And must be very small too like \"0.002\" (not proportional to the size of neurals networks)\n");
                    printf("                     so more neural networks are big, more the rate must be big to \"preserve\" the same rate as little neural networks\n");
                    printf("        default: 0.001\n\n");
                } else if (strcmp(argv[1], "/getdna") == 0 || strcmp(argv[1], "getdna") == 0) {
                    printf("\n    '/getdna <name> [--out <file>] [--maxlengthline <int>]'\n\n  This command write the dna of a neural network in a file or print it in the console\n");
                    printf("     <name> : name of the neural network\n");
                    printf("    Optional:\n");
                    printf("     --out <file>          : write the dna to a file (if not specified, the dna is print in the console)\n");
                    printf("     --maxlengthline <int> : max line length (-1 for infini), add '\n' every --maxlengthline character(s)\n");
                    printf("        default: -1\n\n");
                    printf("      This command was just made for fun and this command will actually be useless.\n\n");
                } else if (strcmp(argv[1], "/enable") == 0 || strcmp(argv[1], "enable") == 0) {
                    printf("\n    '/enable [<int>]'\n\n  This command enable the engine that execute command (ENABLED BY DEFAULT)\n");
                    printf("    Optional:\n");
                    printf("     <int> : if the engine was disabled with an optional argument,\n");
                    printf("           this argument must be specified and be the same when\n");
                    printf("           engine was disabled else the engine isn't enabled\n\n");
                } else if (strcmp(argv[1], "/disable") == 0 || strcmp(argv[1], "disable") == 0) {
                    printf("\n    '/disable [<int>]'\n\n  This command enable the engine that execute command (ENABLED BY DEFAULT)\n");
                    printf("    Optional:\n");
                    printf("     <int> : this disable the engine with a \"key\" and others commands\n");
                    printf("           this argument must be specified and be the same when\n");
                    printf("           engine was disabled else the engine isn't enabled\n\n");
                } else if (strcmp(argv[1], "/show") == 0 || strcmp(argv[1], "show") == 0) {
                    printf("\n    '/show [<int>]'\n\n  This command enable the engine that execute command (ENABLED BY DEFAULT)\n");
                    printf("    Optional:\n");
                    printf("     <int> : if the engine was disabled with an optional argument,\n");
                    printf("           like \"enable\", \"show\" and \"hide\" must be have the same argument to work\n\n");
                } else if (strcmp(argv[1], "/hide") == 0 || strcmp(argv[1], "hide") == 0) {
                    printf("\n    '/hide [<int>]'\n\n  This command enable the engine that execute command (ENABLED BY DEFAULT)\n");
                    printf("    Optional:\n");
                    printf("     <int> : if the engine was disabled with an optional argument,\n");
                    printf("           this argument must be specified and be the same when\n");
                    printf("           engine was disabled else the display isn't disabled\n\n");
                } else if (strcmp(argv[1], "/help") == 0 || strcmp(argv[1], "help") == 0) {
                    printf("\n    '/help [<command_name>]'\n\n  This command display the list of commands or the description of the specified command\n");
                    printf("    Optional:\n");
                    printf("     <command_name> : display more details about the command\n");
                } else if (strcmp(argv[1], "/exit") == 0 || strcmp(argv[1], "exit") == 0) {
                    printf("\n    '/exit'\n\n  This command simply close the processus (useful when you pipe some data to indicate that the pipe is finished)\n\n");
                } else if (strcmp(argv[1], "/about") == 0 || strcmp(argv[1], "about") == 0) {
                    printf("\n    '/about'\n\n  This command display some information about the creator\n\n");
                }
            }
            return 0;
        }
        if (strcmp(argv[0], "/setprecision") == 0) {
            if (argc <= 1) {
                bad_argument();
            }
            __precision = atoi(argv[1]);
            if (__precision < 0) __precision = 0;
            return 0;
        }
        if (strcmp(argv[0], "/setscale") == 0) {
            if (argc <= 1) {
                bad_argument();
            }
            __scale = atof(argv[1]);
            return 0;
        }
        if (strcmp(argv[0], "/create") == 0) {
           
            if (argc <= 1) {
                bad_argument();
            }
           
            char *name = argv[1];
           
            int length;
            int *arr = get_arguments_int(argc,argv,"--structure", &length);
           
            nn_create(name, arr, length);
           
            free(arr);
           
            return 0;
        }
        if (strcmp(argv[0], "/delete") == 0) {
            if (argc <= 1) {
                bad_argument();
            }
            char *name = argv[1];
           
            nn_delete(name);
            return 0;
        }
        if (strcmp(argv[0], "/import") == 0) {
           
            if (argc <= 2) {
                bad_argument();
            }
           
            char *name = argv[1];
            char *new_name = argv[2];
           
            char *filepath;
            int r = get_argument(argc,argv,"--file", &filepath);
           
            char *string;
            int s = get_argument(argc,argv,"--string", &string);
           
            if (r == 0) {
                nn_import(name,new_name,filepath,0);
            } else if (s == 0) {
                nn_import(name,new_name,string,1);
            }
           
            return 0;
        }
        if (strcmp(argv[0], "/export") == 0) {
           
            if (argc <= 1) {
                bad_argument();
            }
           
            char *name = argv[1];
           
            char *filepath;
            int r = get_argument(argc,argv,"--out", &filepath);
           
            int b = get_argument(argc,argv,"--binary",NULL);
           
            int append = get_argument(argc,argv,"--append", NULL);
           
            nn_export(name,filepath,r,b==0,append);
           
            return 0;
        }
        if (strcmp(argv[0], "/evaluate") == 0) {
           
            if (argc <= 1) {
                bad_argument();
            }
           
            char *name = argv[1];
           
            int length;
            double *arr = get_arguments_double(argc,argv,"--input", &length);
           
            char *filepath;
            int r = get_argument(argc,argv,"--out", &filepath);
           
            int append = get_argument(argc,argv,"--append", NULL);
           
            int ntest = get_argument(argc,argv,"--normalize", NULL);
            if (ntest == 0) {
                int lengthn;
                double *n = get_arguments_double(argc,argv,"--normalize", &lengthn);
                if (lengthn > 1) {
                    if (lengthn == 2) {
                        for (int i = 0; i < length; i++) {
                            arr[i] = (arr[i] - n[0]) / (n[1] - n[0]);
                        }
                    } else {
                        if (lengthn != length * 2) {
                            fprintf(stderr,"Err: normalize values not fit input size\n");
                            exit(1);
                        }
                        int j = 0;
                        for (int i = 0; i < length; i++) {
                            arr[i] = (arr[i] - n[j]) / (n[j+1] - n[j]);
                            j+=2;
                        }
                    }
                   
                }
            }
           
            nn_evaluate(name,arr,length,filepath,r,append);
           
            free(arr);
           
           
            return 0;
        }
        if (strcmp(argv[0], "/train") == 0) {
           
            if (argc <= 1) {
                bad_argument();
            }
           
            char *name = argv[1];
           
            char *nb;
            int r = get_argument(argc,argv,"--loop", &nb);
            int count = 10000;
            if (r == 0) {
                count = atoi(nb);
            }
           
            int lengthinput;
            double *arrinput = get_arguments_double(argc,argv,"--input", &lengthinput);
           
            int lengthoutput;
            double *arroutput = get_arguments_double(argc,argv,"--output", &lengthoutput);

            int ntest = get_argument(argc,argv,"--normalize", NULL);
            int use_normalize = 0;
            if (ntest == 0) {
                int lengthn;
                double *n = get_arguments_double(argc,argv,"--normalize", &lengthn);
                if (lengthn > 1) {
                    if (lengthn == 2) {
                        for (int i = 0; i < lengthinput; i++) {
                            arrinput[i] = (arrinput[i] - n[0]) / (n[1] - n[0]);
                        }
                        for (int i = 0; i < lengthoutput; i++) {
                            arroutput[i] = (arroutput[i] - n[0]) / (n[1] - n[0]);
                        }
                        use_normalize = 1;
                    } else {
                        if (lengthn != lengthinput * 2 && lengthn != lengthoutput * 2) {
                            fprintf(stderr,"Err: normalize values not fit input and output size\n");
                            exit(1);
                        }
                        int j = 0;
                        for (int i = 0; i < lengthinput; i++) {
                            arrinput[i] = (arrinput[i] - n[j]) / (n[j+1] - n[j]);
                            j+=2;
                        }
                        j = 0;
                        for (int i = 0; i < lengthoutput; i++) {
                            arroutput[i] = (arroutput[i] - n[j]) / (n[j+1] - n[j]);
                            j+=2;
                        }
                        use_normalize = 1;
                    }
                }
            }
            if (use_normalize == 0) {
                int ntest_ = get_argument(argc,argv,"--normalize-input", NULL);
                int ntest_2 = get_argument(argc,argv,"--normalize-output", NULL);
                if (ntest_ == 0) {
                    int lengthn_;
                    double *n_ = get_arguments_double(argc,argv,"--normalize-input", &lengthn_);
                    if (lengthn_ > 1) {
                        if (lengthn_ == 2) {
                            for (int i = 0; i < lengthinput; i++) {
                                arrinput[i] = (arrinput[i] - n_[0]) / (n_[1] - n_[0]);
                            }
                        } else {
                            if (lengthn_ != lengthinput * 2) {
                                fprintf(stderr,"Err: normalize values not fit input size\n");
                                exit(1);
                            }
                            int j = 0;
                            for (int i = 0; i < lengthinput; i++) {
                                arrinput[i] = (arrinput[i] - n_[j]) / (n_[j+1] - n_[j]);
                                j+=2;
                            }
                        }
                    }
                }
                if (ntest_2 == 0) {
                    int lengthn2_;
                    double *n2_ = get_arguments_double(argc,argv,"--normalize-output", &lengthn2_);
                    if (lengthn2_ > 1) {
                        if (lengthn2_ == 2) {
                            for (int i = 0; i < lengthoutput; i++) {
                                arroutput[i] = (arroutput[i] - n2_[0]) / (n2_[1] - n2_[0]);
                            }
                        } else {
                            if (lengthn2_ != lengthoutput * 2) {
                                fprintf(stderr,"Err: normalize values not fit output size\n");
                                exit(1);
                            }
                            int j = 0;
                            for (int i = 0; i < lengthoutput; i++) {
                                arroutput[i] = (arroutput[i] - n2_[j]) / (n2_[j+1] - n2_[j]);
                                j+=2;
                            }
                        }
                    }
                }
            }

            nn_train(name,arrinput,lengthinput,arroutput,lengthoutput,count);
                       
            free(arrinput);
            free(arroutput);
           

            return 0;
        }
        if (strcmp(argv[0], "/merge") == 0) {
           
            if (argc <= 3) {
                bad_argument();
            }
           
            char *name_a = argv[1];
            char *name_b = argv[2];
            char *name_c = argv[3];
           
            char *nb;
            int r = get_argument(argc,argv,"--mixing-power", &nb);
            int count = 15;
            if (r == 0) {
                count = atoi(nb);
            }
           
           
           
            nn_merge(name_a,name_b,name_c,count);
            return 0;
        }
        if (strcmp(argv[0], "/mutate") == 0) {
            if (argc <= 1) {
                bad_argument();
            }
            char *name = argv[1];
            char *nb;
            int r = get_argument(argc,argv,"--mutationbitrate", &nb);
            double rate = 0.001;
            if (r == 0) {
                rate = atof(nb);
            }
            nn_mutate(name,rate);
            return 0;
        }
        if (strcmp(argv[0], "/rename") == 0) {
            if (argc <= 2) {
                bad_argument();
            }
            int strict = get_argument(argc,argv,"--noerr", NULL);
            char *name = argv[1];
            char *new_name = argv[2];
            nn_rename(name,new_name,strict);
            return 0;
        }
        if (strcmp(argv[0], "/getdna") == 0) {
            if (argc <= 1) {
                bad_argument();
            }
            char *name = argv[1];
            char *filepath;
            int r = get_argument(argc,argv,"--out", &filepath);
            char *nb2;
            int r2 = get_argument(argc,argv,"--maxlengthline", &nb2);
            int maxlength = -1;
            if (r2 == 0) {
                maxlength = atoi(nb2);
            }
            int append = get_argument(argc,argv,"--append", NULL);
            nn_getadn(name,filepath,r,maxlength,append);
            return 0;
        }
        if (strcmp(argv[0], "/exit") == 0) {
            return 2;
        }
        return 1;
    }





    /**********************\
    *                      *
    *       COMMANDS       *
    *                      *
    \**********************/
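    /*
     * The nn_* functions below implement the slash commands. They look up
     * networks by name in the global object store "obj" and call exit(1)
     * with an error message when something goes wrong. nn_create refuses a
     * name that is already taken; according to the /help text, nn_delete on
     * an unknown name is silently ignored.
     */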

    void nn_create(char *name, int *structure, int length) {
        if (neural_network_objects_is_exist(&obj,name)) {
            fprintf(stderr, "Err: \"%s\" Neural Network already exist\n",name);
            exit(1);
        }
        neural_network_objects_insert(&obj,name,structure,length);
    }
    void nn_delete(char *name) {
        neural_network_objects_remove(&obj,name);
    }
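    /*
     * nn_evaluate (descriptive comment added for readability)
     * Runs a forward pass of the named network on the given input vector and
     * prints the resulting output matrix to the console, or writes/appends it
     * to a file when --out was given.
     */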
    void nn_evaluate(char *name, double *input, int length, char *filepath, int use_stdout, int append) {
        neural_network_object test = neural_network_objects_get(&obj,name);
        matrix output = neural_network_forward(test.nn,input,length);
       
        if (use_stdout == 1) {
            matrix_print(output);
        } else {
            FILE *f = NULL;
            if (append == 0) {
                f = fopen(filepath, "a");
            } else {
                f = fopen(filepath, "w");
            }
            if (f == NULL) {
                printf("Err: opening file '%s'\n",filepath);
                exit(1);
            }
            matrix_fprint(output,f);
            fclose(f);
        }
           
        matrix_free(output);
    }
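    /*
     * nn_export (descriptive comment added for readability)
     * Serializes a network to stdout or to a file. Each record starts with a
     * flag ('0' = text, '1' = binary), followed by the length of the name and
     * the name itself ("<len>-<name>" in text mode, a raw int plus the name
     * bytes in binary mode), then the data written by neural_network_export().
     * When appending to a non-empty file, a newline separates the records.
     */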
    void nn_export(char *name, char *filepath, int use_stdout, int is_binary, int append) {
        neural_network_object test = neural_network_objects_get(&obj,name);
       
        if (use_stdout == 1) {
            printf("%d%d-%s",is_binary,strlen(name),name);
            neural_network_export(test.nn,NULL,0);
        } else {
            FILE *f = NULL;
            if (append == 0) {
                f = fopen(filepath, "ab");
            } else {
                f = fopen(filepath, "wb");
            }
            if (f == NULL) {
                printf("Err: opening file '%s'\n",filepath);
                exit(1);
            }
            if (fseek(f, 0, SEEK_END) != 0) {
                printf("Err: reading file '%s'\n",filepath);
                exit(1);
            }
            long isempty = ftell(f);
            if (isempty == -1L) {
                printf("Err: reading file '%s'\n",filepath);
                exit(1);
            }
            if (isempty > 0) {
                fprintf(f,"\n");
            }
            if (fseek(f, 0, SEEK_CUR) != 0) {
                printf("Err: reading file '%s'\n",filepath);
                exit(1);
            }
            if (is_binary == 1) {
                char b = '1';
                int len = strlen(name);
                fwrite(&b,sizeof(char),1,f);
                fwrite(&len,sizeof(int),1,f);
                fwrite(name,sizeof(char),len,f);
            } else {
                fprintf(f,"%d%d-%s",is_binary,strlen(name),name);
            }
           
            neural_network_export(test.nn,f,is_binary);
            fclose(f);
        }
    }
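    /*
     * nn_import (descriptive comment added for readability)
     * Reads an exported file (or an in-memory string) record by record,
     * skipping whitespace, until it finds a record whose stored name matches
     * <name>; the network is then rebuilt and registered under <new_name>.
     * Unrecognized or truncated records are skipped via nextline().
     */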
    void nn_import(char *name, char *new_name, char *d, int is_string) {
        if (neural_network_objects_is_exist(&obj,new_name)) {
            fprintf(stderr, "Err: \"%s\" Neural Network already exist\n",new_name);
            exit(1);
        }
        if (is_string == 1) {
            int success;
            neural_network nn = neural_network_import(d,&success);
            if (success == 1) {
                neural_network_objects_insertnn(&obj,new_name,nn);
            } else {
                fprintf(stderr,"Err: Can't find data or bad data\n");
                exit(1);
            }
        } else {
            FILE *f = fopen(d, "rb");
            if (f == NULL) {
                printf("Err: opening file '%s'\n",d);
                exit(1);
            }
           
            int c;
           
            while (1) {
               
               
                c = fgetc(f);
               
                // end of file
                if (c == EOF) break;
               
                // trim
                if (isspace(c)) {
                    continue;
                }
               
                // binary or string ?
                if ((char)c == '1') {
                    // binary
                   
                    // get name
                    int len = 0;
                    fread(&len, sizeof(len), 1, f);
                   
                    int tempsize = 2;
                    int tempindex = 0;
                    char *_name = (char*)malloc(tempsize*sizeof(char));
                    if (_name == NULL) bad_memory_allocation();
                   
                    for (int i = 0; i < len; i++) {
                        char tempc;
                        fread(&tempc, sizeof(char), 1, f);
                        putcharline(tempc,&_name,&tempindex,&tempsize);
                    }
                   
                    if (strcmp(name, _name) == 0) {
                       
                        char *data = neural_network_importbinary(f);
                       
                        if (data == NULL) {
                            free(_name);
                            nextline(f);
                            continue;
                        }
                       
                        int success;
                        neural_network nn = neural_network_import(data,&success);
                        if (success == 1) {
                            neural_network_objects_insertnn(&obj,new_name,nn);
                        } else {
                            fprintf(stderr,"Err: Can't find data or bad data\n");
                            exit(1);
                        }
                       
                        free(_name);
                        free(data);
                       
                        break;
                    }
                   
                   
                    free(_name);
                   
                   
                } else if ((char)c == '0') {
                   
                    int tempsize = 2;
                    char *templen = (char *)malloc(tempsize * sizeof(char));
                    if (templen == NULL) bad_memory_allocation();
                    int tempindex = 0;
                    int tempc = fgetc(f);
                    int _g = 0;
                    while (tempc != '-' && tempc != EOF) {
                        putcharline((char)tempc,&templen,&tempindex,&tempsize);
                        tempc = fgetc(f);
                        if (tempc == EOF) {
                            free(templen);
                            nextline(f);
                            _g = 1;
                            break;
                        }
                    }
                    if (_g == 1) continue;
                    int len = atoi(templen);
                    free(templen);
                   
                    tempsize = 2;
                    tempindex = 0;
                    templen = (char *)malloc(tempsize * sizeof(char));
                    if (templen == NULL) bad_memory_allocation();
                    for (int i = 0; i < len; i++) {
                        tempc = fgetc(f);
                        if (tempc == EOF) {
                            free(templen);
                            nextline(f);
                            _g = 1;
                            break;
                        }
                        putcharline((char)tempc,&templen,&tempindex,&tempsize);
                    }
                    if (_g == 1) continue;
                   
                    if (strcmp(name, templen) == 0) {
                       
                        tempsize = 2;
                        tempindex = 0;
                        char *data = (char *)malloc(tempsize * sizeof(char));
                        if (data == NULL) bad_memory_allocation();
                        tempc = fgetc(f);
                        while (tempc != '\n' && tempc != EOF) {
                            putcharline((char)tempc,&data,&tempindex,&tempsize);
                            tempc = fgetc(f);
                        }
                       
                        int success;
                        neural_network nn = neural_network_import(data,&success);
                        if (success == 1) {
                            neural_network_objects_insertnn(&obj,new_name,nn);
                        } else {
                            fprintf(stderr,"Err: Can't find data or bad data\n");
                            exit(1);
                        }
                       
                        free(templen);
                        free(data);
                       
                        break;
                       
                    }
                   
                    free(templen);
                   
                } else {
                    nextline(f);
                    continue;
                }
               
            }
           
            fclose(f);
        }
    }
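    /*
     * nn_train (descriptive comment added for readability)
     * The flattened --input / --output arrays may contain several samples:
     * inputlength must be a multiple of the input-layer size and outputlength
     * a multiple of the output-layer size, with the same number of samples in
     * both. Each iteration picks one sample at random and performs a single
     * training step via neural_network_train().
     */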
    void nn_train(char *name, double *input, int inputlength, double *output, int outputlength, int times) {

        neural_network_object test = neural_network_objects_get(&obj,name);

        if (inputlength % test.nn.weights[0].rows != 0) {
            fprintf(stderr,"Err: Bad dimension input\n");
            exit(1);
        }
        if (outputlength % test.nn.weights[test.nn.size-1].cols != 0) {
            fprintf(stderr,"Err: Bad dimension output\n");
            exit(1);
        }
       
        if (inputlength / test.nn.weights[0].rows != outputlength / test.nn.weights[test.nn.size-1].cols) {
            fprintf(stderr,"Err: The number of inputs is different from the number of outputs\n");
            exit(1);
        }
       
        int max = inputlength / test.nn.weights[0].rows;
       
        for (int i = 0; i < times; i++) {
            int r = rand() % max; // pick a random sample index (stays strictly below max)
           
            double *arrinput = (double *)malloc(test.nn.weights[0].rows * sizeof(double));
            if (arrinput == NULL) bad_memory_allocation();
            for (int j = 0; j < test.nn.weights[0].rows; j++) {
                arrinput[j] = input[(int)r*test.nn.weights[0].rows+j];
            }
            double *arroutput = (double *)malloc(test.nn.weights[test.nn.size-1].cols * sizeof(double));
            if (arroutput == NULL) bad_memory_allocation();
            for (int j = 0; j < test.nn.weights[test.nn.size-1].cols; j++) {
                arroutput[j] = output[(int)r*test.nn.weights[test.nn.size-1].cols+j];
            }
           
            neural_network_train(test.nn,arrinput,test.nn.weights[0].rows,arroutput,test.nn.weights[test.nn.size-1].cols);
           
            free(arrinput);
            free(arroutput);
           
        }
    }
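    /*
     * nn_merge (descriptive comment added for readability)
     * Genetic crossover of two parent networks with the same structure: every
     * weight and bias is converted to its sizeof(double)*8-bit binary form
     * (doubleToBinary), the resulting "chromosomes" are cut at random
     * positions (the number of cuts is controlled by --mixing-power), and the
     * child chromosome alternates between parent A and parent B at each cut
     * before being decoded back into doubles for the new network.
     */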
    void nn_merge(char *name_a, char *name_b, char *name_c, int power_mixing) {
       
       
        neural_network_object a = neural_network_objects_get(&obj,name_a);
        neural_network_object b = neural_network_objects_get(&obj,name_b);
       
        if (neural_network_objects_is_exist(&obj,name_c)) {
            fprintf(stderr, "Err: \"%s\" Neural Network already exist\n",name_c);
            exit(1);
        }
       
        int cut_count = power_mixing;
       
        // nn chromosome a
        int chromosome_a_weights_index = 0;
        int chromosome_a_weights_length = 2;
        char *chromosome_a_weights = (char *)malloc(chromosome_a_weights_length*sizeof(char));
        if (chromosome_a_weights == NULL) bad_memory_allocation();
        int chromosome_a_bias_index = 0;
        int chromosome_a_bias_length = 2;
        char *chromosome_a_bias = (char *)malloc(chromosome_a_bias_length*sizeof(char));
        if (chromosome_a_bias == NULL) bad_memory_allocation();
       
        // nn chromosome b
        int chromosome_b_weights_index = 0;
        int chromosome_b_weights_length = 2;
        char *chromosome_b_weights = (char *)malloc(chromosome_b_weights_length*sizeof(char));
        if (chromosome_b_weights == NULL) bad_memory_allocation();
        int chromosome_b_bias_index = 0;
        int chromosome_b_bias_length = 2;
        char *chromosome_b_bias = (char *)malloc(chromosome_b_bias_length*sizeof(char));
        if (chromosome_b_bias == NULL) bad_memory_allocation();
       
       
        // generate chromosomes
        for (int i = 0; i < a.nn.size; i++) {
           
            // generate weights chromosome
            for (int j = 0; j < a.nn.weights[i].rows; j++) {
                for (int k = 0; k < a.nn.weights[i].cols; k++) {
                    char *s = doubleToBinary(a.nn.weights[i].data[j][k]);
                    int len = sizeof(double)*8;
                    for (int c = 0; c < len; c++) {
                        putcharline(s[c],&chromosome_a_weights,&chromosome_a_weights_index,&chromosome_a_weights_length);
                    }
                    free(s);
                    s = doubleToBinary(b.nn.weights[i].data[j][k]);
                    for (int c = 0; c < len; c++) {
                        putcharline(s[c],&chromosome_b_weights,&chromosome_b_weights_index,&chromosome_b_weights_length);
                    }
                    free(s);
                }
            }
           
            // generate bias chromosome
            for (int j = 0; j < a.nn.bias[i].rows; j++) {
                for (int k = 0; k < a.nn.bias[i].cols; k++) {
                    char *s = doubleToBinary(a.nn.bias[i].data[j][k]);
                    int len = sizeof(double)*8;
                    for (int c = 0; c < len; c++) {
                        putcharline(s[c],&chromosome_a_bias,&chromosome_a_bias_index,&chromosome_a_bias_length);
                    }
                    free(s);
                    s = doubleToBinary(b.nn.bias[i].data[j][k]);
                    for (int c = 0; c < len; c++) {
                        putcharline(s[c],&chromosome_b_bias,&chromosome_b_bias_index,&chromosome_b_bias_length);
                    }
                    free(s);
                }
            }
           
        }
       
        double factor = (double)chromosome_a_bias_length / (double)chromosome_a_weights_length;
       
        // perform a uniform cut
        int cut_count_weights = floor((double)cut_count * ((double)1.0-factor));
        int cut_count_bias = ceil((double)cut_count * factor);
       
        // cut at random position
        int *cut_weights_list = (int *)malloc(cut_count_weights*sizeof(int));
        if (cut_weights_list == NULL) bad_memory_allocation();
        int *cut_bias_list = (int *)malloc(cut_count_bias*sizeof(int));
        if (cut_bias_list == NULL) bad_memory_allocation();
        for (int i = 0; i < cut_count_weights; i++) {
            double r = ((double)rand()/(double)RAND_MAX) * (double)chromosome_a_weights_index;
            cut_weights_list[i] = (int)r;
        }
        for (int i = 0; i < cut_count_bias; i++) {
            double r = ((double)rand()/(double)RAND_MAX) * (double)chromosome_a_bias_index;
            cut_bias_list[i] = (int)r;
        }
       
        // nn chromosome c
        int chromosome_c_weights_index = 0;
        int chromosome_c_weights_length = 2;
        char *chromosome_c_weights = (char *)malloc(chromosome_c_weights_length*sizeof(char));
        if (chromosome_c_weights == NULL) bad_memory_allocation();
        int chromosome_c_bias_index = 0;
        int chromosome_c_bias_length = 2;
        char *chromosome_c_bias = (char *)malloc(chromosome_c_bias_length*sizeof(char));
        if (chromosome_c_bias == NULL) bad_memory_allocation();
       
       
        int t = 0;
        for (int i = 0; i < chromosome_a_weights_index; i++) {
            if (t == 0) {
                putcharline(chromosome_a_weights[i],&chromosome_c_weights,&chromosome_c_weights_index,&chromosome_c_weights_length);
            } else {
                putcharline(chromosome_b_weights[i],&chromosome_c_weights,&chromosome_c_weights_index,&chromosome_c_weights_length);
            }
            for (int j = 0; j < cut_count_weights; j++) {
                if (i == cut_weights_list[j]) {
                    t = t == 0 ? 1 : 0;
                }
            }
        }
       
        t = 0;
        for (int i = 0; i < chromosome_a_bias_index; i++) {
            if (t == 0) {
                putcharline(chromosome_a_bias[i],&chromosome_c_bias,&chromosome_c_bias_index,&chromosome_c_bias_length);
            } else {
                putcharline(chromosome_b_bias[i],&chromosome_c_bias,&chromosome_c_bias_index,&chromosome_c_bias_length);
            }
            for (int j = 0; j < cut_count_bias; j++) {
                if (i == cut_bias_list[j]) {
                    t = t == 0 ? 1 : 0;
                }
            }
        }
       
        // create the neural network result
        int *structure = (int *)malloc((a.nn.size+3)*sizeof(int)); // +3: layer sizes plus the two trailing zero entries written below
        if (structure == NULL) bad_memory_allocation();
        for (int i = 0; i < a.nn.size; i++) {
            structure[i] = a.nn.weights[i].rows;
        }
        structure[a.nn.size] = a.nn.weights[a.nn.size-1].cols;
        structure[a.nn.size+1] = 0;
        structure[a.nn.size+2] = 0;
       
        nn_create(name_c, structure, a.nn.size+1);
       
       
        neural_network_object c = neural_network_objects_get(&obj,name_c);
       
       
        int g_index = 0;
        int h_index = 0;
        for (int i = 0; i < c.nn.size; i++) {
            for (int j = 0; j < c.nn.weights[i].rows; j++) {
                for (int k = 0; k < c.nn.weights[i].cols; k++) {
                    char *s = (char *)malloc(sizeof(double)*8+1);
                    if (s == NULL) bad_memory_allocation();
                    int s_index = 0;
                    int s_length = sizeof(double)*8;
                    for (int h = 0; h < sizeof(double)*8; h++) {
                        putcharline(chromosome_c_weights[g_index],&s,&s_index,&s_length);
                        g_index++;
                    }
                    double r = binaryToDouble(s);
                    c.nn.weights[i].data[j][k] = r;
                    free(s);
                }
            }
            for (int j = 0; j < c.nn.bias[i].rows; j++) {
                for (int k = 0; k < c.nn.bias[i].cols; k++) {
                    char *s = (char *)malloc(sizeof(double)*8+1);
                    if (s == NULL) bad_memory_allocation();
                    int s_index = 0;
                    int s_length = sizeof(double)*8;
                    for (int h = 0; h < sizeof(double)*8; h++) {
                        putcharline(chromosome_c_bias[h_index],&s,&s_index,&s_length);
                        h_index++;
                    }
                    double r = binaryToDouble(s);
                    c.nn.bias[i].data[j][k] = r;
                    free(s);
                }
            }
        }
       
        free(structure);
       
        free(cut_weights_list);
        free(cut_bias_list);
       
        free(chromosome_c_weights);
        free(chromosome_c_bias);
        free(chromosome_a_weights);
        free(chromosome_a_bias);
        free(chromosome_b_weights);
        free(chromosome_b_bias);
       
    }
    void nn_rename(char *name, char *new_name, int strict) {
        if (neural_network_objects_is_exist(&obj,new_name)) {
            if (strict == 1) {
                fprintf(stderr, "Err: \"%s\" Neural Network already exist\n",new_name);
                exit(1);
            } else {
                return;
            }
        }
        if (neural_network_objects_is_exist(&obj,name) == 0) {
            if (strict == 1) {
                fprintf(stderr, "Err: Unknow \"%s\" Neural Network\n",name);
                exit(1);
            } else {
                return;
            }
        }
       
        for (int i = 0; i < obj.length; i++) {
            if (strcmp(obj.nns[i].name,name) == 0) {
               
                free(obj.nns[i].name);
               
                char *temp = (char *)malloc((strlen(new_name)+1)*sizeof(char));
                if (temp == NULL) bad_memory_allocation();
                strcpy(temp, new_name);
                obj.nns[i].name = temp;
               
                return;
            }
        }
        fprintf(stderr, "Err: Unknow \"%s\" Neural Network\n",name);
        exit(1);
    }
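    /*
     * nn_mutate (descriptive comment added for readability)
     * Bit-flip mutation: each bit of the binary representation of every
     * weight and bias is flipped independently with probability "rate",
     * then the value is decoded back into a double.
     */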
    void nn_mutate(char *name, double rate) {
        neural_network_object test = neural_network_objects_get(&obj,name);
        for (int i = 0; i < test.nn.size; i++) {
           
            // generate weights chromosome
            for (int j = 0; j < test.nn.weights[i].rows; j++) {
                for (int k = 0; k < test.nn.weights[i].cols; k++) {
                    char *s = doubleToBinary(test.nn.weights[i].data[j][k]);
                    int len = sizeof(double)*8;
                    for (int c = 0; c < len; c++) {
                        double r = ((double)rand()/(double)RAND_MAX);
                        if (r < rate) {
                            s[c] = s[c] == '0' ? '1' : '0';
                        }
                    }
                    double nv = binaryToDouble(s);
                    test.nn.weights[i].data[j][k] = nv;
                    free(s);
                }
            }
           
            // generate bias chromosome
            for (int j = 0; j < test.nn.bias[i].rows; j++) {
                for (int k = 0; k < test.nn.bias[i].cols; k++) {
                    char *s = doubleToBinary(test.nn.bias[i].data[j][k]);
                    int len = sizeof(double)*8;
                    for (int c = 0; c < len; c++) {
                        double r = ((double)rand()/(double)RAND_MAX);
                        if (r < rate) {
                            s[c] = s[c] == '0' ? '1' : '0';
                        }
                    }
                    double nv = binaryToDouble(s);
                    test.nn.bias[i].data[j][k] = nv;
                    free(s);
                }
            }
           
        }
       
    }
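    /*
     * nn_getadn (descriptive comment added for readability)
     * Encodes the concatenated weight and bias bit strings as a DNA-like
     * sequence, two bits per letter: 00 -> A, 10 -> C, 01 -> G, 11 -> T.
     * The result is printed to the console or written to --out, optionally
     * wrapped every --maxlengthline characters.
     */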
    void nn_getadn(char *name, char *filepath, int use_stdout, int maxlengthline, int append) {
        neural_network_object test = neural_network_objects_get(&obj,name);
       
        int chromosome_weights_index = 0;
        int chromosome_weights_length = 2;
        char *chromosome_weights = (char *)malloc(chromosome_weights_length*sizeof(char));
        if (chromosome_weights == NULL) bad_memory_allocation();
        int chromosome_bias_index = 0;
        int chromosome_bias_length = 2;
        char *chromosome_bias = (char *)malloc(chromosome_bias_length*sizeof(char));
        if (chromosome_bias == NULL) bad_memory_allocation();
       
        // generate chromosomes
        for (int i = 0; i < test.nn.size; i++) {
           
            // generate weights chromosome
            for (int j = 0; j < test.nn.weights[i].rows; j++) {
                for (int k = 0; k < test.nn.weights[i].cols; k++) {
                    char *s = doubleToBinary(test.nn.weights[i].data[j][k]);
                    int len = sizeof(double)*8;
                    for (int c = 0; c < len; c++) {
                        putcharline(s[c],&chromosome_weights,&chromosome_weights_index,&chromosome_weights_length);
                    }
                    free(s);
                }
            }
           
            // generate bias chromosome
            for (int j = 0; j < test.nn.bias[i].rows; j++) {
                for (int k = 0; k < test.nn.bias[i].cols; k++) {
                    char *s = doubleToBinary(test.nn.bias[i].data[j][k]);
                    int len = sizeof(double)*8;
                    for (int c = 0; c < len; c++) {
                        putcharline(s[c],&chromosome_bias,&chromosome_bias_index,&chromosome_bias_length);
                    }
                    free(s);
                }
            }
           
        }
       
       
        int adn_index = 0;
        int adn_length = 2;
        int g_index = 1;
        char *adn = (char *)malloc(adn_length*sizeof(char));
        if (adn == NULL) bad_memory_allocation();
        for (int i = 0; i < chromosome_weights_index; i+=2) { // encode the weights chromosome, two bits per DNA letter
            if (chromosome_weights[i] == '0' && chromosome_weights[i+1] == '0') {
                putcharline('A',&adn,&adn_index,&adn_length);
            } else if (chromosome_weights[i] == '1' && chromosome_weights[i+1] == '0') {
                putcharline('C',&adn,&adn_index,&adn_length);
            } else if (chromosome_weights[i] == '0' && chromosome_weights[i+1] == '1') {
                putcharline('G',&adn,&adn_index,&adn_length);
            } else {
                putcharline('T',&adn,&adn_index,&adn_length);
            }
            if (maxlengthline > 0) {
                if (g_index % maxlengthline == 0) {
                    putcharline('\n',&adn,&adn_index,&adn_length);
                }
            }
            g_index++;
        }
        for (int i = 0; i < chromosome_bias_index; i+=2) {
            if (chromosome_bias[i] == '0' && chromosome_bias[i+1] == '0') {
                putcharline('A',&adn,&adn_index,&adn_length);
            } else if (chromosome_bias[i] == '1' && chromosome_bias[i+1] == '0') {
                putcharline('C',&adn,&adn_index,&adn_length);
            } else if (chromosome_bias[i] == '0' && chromosome_bias[i+1] == '1') {
                putcharline('G',&adn,&adn_index,&adn_length);
            } else {
                putcharline('T',&adn,&adn_index,&adn_length);
            }
            if (maxlengthline > 0) {
                if (g_index % maxlengthline == 0) {
                    putcharline('\n',&adn,&adn_index,&adn_length);
                }
            }
            g_index++;
        }
       
       
        if (use_stdout == 1) {
            printf("%s\n",adn);
        } else {
            FILE *f = NULL;
            if (append == 0) {
                f = fopen(filepath, "a");
            } else {
                f = fopen(filepath, "w");
            }
            if (f == NULL) {
                printf("Err: opening file '%s'\n",filepath);
                exit(1);
            }
            if (fseek(f, 0, SEEK_END) != 0) {
                printf("Err: reading file '%s'\n",filepath);
                exit(1);
            }
            long isempty = ftell(f);
            if (isempty == -1L) {
                printf("Err: reading file '%s'\n",filepath);
                exit(1);
            }
            if (isempty > 0) {
                fprintf(f,"\n");
            }
            if (fseek(f, 0, SEEK_CUR) != 0) {
                printf("Err: reading file '%s'\n",filepath);
                exit(1);
            }
            fprintf(f,"%s",adn);
            fclose(f);
        }
           
        free(adn);
        free(chromosome_weights);
        free(chromosome_bias);
    }





    /**********************\
    *                      *
    *         MAIN         *
    *                      *
    \**********************/
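    /*
     * main (descriptive comment added for readability)
     * When stdin is a terminal, the commands are taken from the command line
     * (classic mode); otherwise the program reads them from the pipe via
     * process_pipe(). An illustrative classic-mode call (assumed syntax,
     * matching the forum examples) would be:
     *
     *   neuralNetwork /create xor --structure 2 5 1 /evaluate xor --input 0 1
     */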

    int main(int argc, char *argv[]) {

        neural_network_objects_init(&obj);
       
        srand(time(NULL));
       
        if (_isatty(_fileno(stdin))) {

            // classic mode: the commands are passed directly on the command line
            int *lengths;
            char*** multiarrayargs = parse_multi_arguments(argc, argv, &lengths);
            for (int i = 0; i < lengths[0]; i++) {
                int r = process_command(lengths[i+1], multiarrayargs[i]);
                if (r == 2) {
                    parse_multi_arguments_free(multiarrayargs,lengths);
                    return 0;
                }
            }
            parse_multi_arguments_free(multiarrayargs,lengths);

        } else {

            // piped mode: read commands from stdin until an /exit command is received
            process_pipe(argc, argv);

        }
       
       return 0;
       
    }




  • Code:
    [download=doc_english.txt]/**
    * [Documentation]
    *
    * Usage:
    *
    *   /create <name> --structure <layer>...
    *   /delete <name>
    *   /rename <name> <new_name> [--noerr]
    *   /export <name> [--out <file>] [--append]
    *   /import <name> <new_name> (--file <file> | --string <string>)
    *   /evaluate <name> --input <double>... [--out <file>] [--normalize (<min> <max>)...] [--append]
    *   /train <name> --input <double>... --output <double>... [[--normalize (<min> <max>)...] | [--normalize-input (<min> <max>)...] [--normalize-output (<min> <max>)...]] [--loop <int>]
    *   /merge <parent_name_1> <parent_name_2> <child_name> [--mixing-power <int>]
    *   /mutate <name> [--rate <double>]
    *   /getdna <name> [--out <file>] [--append] [--maxlengthline <int>]
    *   /print <string>
    *   /printn <string>
    *   /enable [<int>]
    *   /disable [<int>]
    *   /show [<int>]
    *   /hide [<int>]
    *   /about
    *   /exit
    *   /help [<command_name>]
    *
    *
    * Commands Description:
    *
    *   "/create" - This command create a neural network.
    *        <name>              : Just the name of the neural network, this name is used as a unique ID.
    *              If a neural network already have this name, an error is returned. (failed to rename)
    *        --structure <layer> : Allows to define the structure of the neural network.
    *              This first integer is for the "input layer" while the last integer is for the "output layer"
    *              sample: --structure 2 5 1 (That create a neural network with two neurons in input layer, five
    *              neurons in hidden layer and 1 neuron in output layer
    *
    *   "/delete" - This command delete a neural network.
    *        <name> : Delete the neural network which have this name. If no neural networks have this name, no error is returned.
    *
    *   "/rename" - This command rename a neural network
    *        <name>     : the old name
    *        <new_name> : the new name. If a neural network already have this name, an error is returned. (failed to rename)
    *       Optional:
    *        --noerr : if the neural network <name> not exist or if the neural network <new_name>
    *                already exist, no error is returned if this parameter is specified
    *
    *   "/export" - This command export a neural network to a file or print it in the console
    *        <name>       : name of the neural network
    *       Optional:
    *        --out <file> : write the neural network into a file
    *        --append     : if specified, the neural network is appended to the file (need the previous argument)
    *
    *   "/import" - This command imports a neural network that was exported before
    *        <name>            : name of the neural network to import
    *        <new_name>        : new name of the neural network (can be the same as the argument "<name>")
    *        --file <file>     : file where the neural network is located
    *        --string <string> : representation of a neural network
    *
    *   "/evaluate" - This command evaluate a neural network with a specified input
    *        <name>           : name of the neural network
    *        --input <double>... : the specified input must fit the first layer of the neural network.
    *              if the structure of the neural network is like : 5 8 8 9 3 2
    *              we need to pass an input of 5 numbers like that : 0.26 0.56 0.89 0.42 0.12
    *       Optional:
    *        --out <file>            : write the result to a file (if not specified, the result is print in the console)
    *        --normalize <min> <max> : normalize each double of the "--input" parameter between <min> and <max>
    *
    *   "/train" - This command train a neural network with inputs data and outputs data
    *        <name> : name of the neural network
    *        --input <double>... : the specified input must fit the first layer of the neural network.
    *              if the structure of the neural network is like : 5 8 8 9 3 2
    *              we need to pass an input of 5 numbers like that : 0.26 0.56 0.89 0.42 0.12
    *        --output <double>... : the specified output must fit the last layer of the neural network.
    *              if the structure of the neural network is like : 5 8 8 9 3 2
    *              we need to pass an output of 2 numbers like that : 0.12 0.86
    *
    *         But for accurate training, we can pass more than 5 numbers or 2 numbers, but for a better understanding of that,
    *         the XOR example will be nice to study :)
    *
    *       Optional:
    *        --loop <int>            : number of iterations
    *                        (the neural network learn very slowly because the global weights and bias follow the gradient of partials derivatives)
    *           default: 10000
    *        --normalize (<min> <max>)... : normalize each double of the "--input" and "--output" parameters between <min> and <max>
    *        --normalize-input (<min> <max>)... : same as --normalize but normalize only the input
    *        --normalize-output (<min> <max>)... : same as --normalize but normalize only the output
    *
    *   "/merge" - This command merge two neurals networks (with same structure) into one neural network
    *        <parent_name_1>       : name of a neural network
    *        <parent_name_2>       : name of a neural network
    *        <child>               : name of the child neural network
    *       Optional:
    *        --mixing-power <int>  : the strength of mixing (not proportional to the size of neurals networks)
    *              so more neural networks are big, more the mixing power must be big to "preserve" the same mixing as little neural networks
    *           default: 15
    *
    *   "/mutate" - This command mutate a neural network (some weights and some bias are randomly mutate)
    *        <name>          : name of the neural network
    *       Optional:
    *        --rate <double> : must be between 0 and 1. And must be very small too like "0.002" (not proportional to the size of neurals networks)
    *              so more neural networks are big, more the rate must be big to "preserve" the same rate as little neural networks
    *           default: 0.001
    *
    *   "/getdna" - This command write the dna of a neural network in a file or print it in the console
    *        <name> : name of the neural network
    *       Optional:
    *        --out <file>          : write the dna to a file (if not specified, the dna is print in the console)
    *        --maxlengthline <int> : max line length (-1 for infini), add '\n' every --maxlengthline character(s)
    *           default: -1
    *
    *         This command was just made for fun and this command will actually be useless.
    *
    *
    *   "/print" - This command print a string
    *        <string> : a simple string
    *
    *   "/printn" - This command print a string without new line character
    *        <string> : a simple string
    *
    *   "/enable" - This command enable the engine that execute command (ENABLED BY DEFAULT)
    *       Optional:
    *        <int> : if the engine was disabled with an optional argument,
    *              this argument must be specified and be the same when
    *              engine was disabled else the engine isn't enabled.
    *
    *   "/disable" - This command disable the engine that execute command
    *       Optional:
    *        <int> : this disable the engine with a "key" and others commands
    *              like "enable", "show" and "hide" must be have the same argument to work
    *
    *   "/show" - This command enable the display (only for piped commands)
    *       Optional:
    *        <int> : if the engine was disabled with an optional argument,
    *              this argument must be specified and be the same when
    *              engine was disabled else the display isn't enabled.
    *
    *   "/hide" - This command disable the display (only for piped commands)
    *       Optional:
    *        <int> : if the engine was disabled with an optional argument,
    *              this argument must be specified and be the same when
    *              engine was disabled else the display isn't disabled.
    *
    *
    *
    *
    * Example:
    *
    *   In this simple example, we create a neural network to compute a XOR logic gate.
    *   First, we need to create the neural network:
    *
    *         /create "XOR Gate" --structure 2 10 1
    *
    *   A XOR Gate works like this:
    *
    *         0 0 => 0
    *         1 0 => 1
    *         0 1 => 1
    *         1 1 => 0
    *
    *  So we need an input layer of 2 neurons and an output layer of 1 neuron.
    *  To compute the XOR Gate, we also need a hidden layer, because a XOR Gate cannot be computed with a simple perceptron.
    *
    *  So now, we need to train our neural network like this:
    *
    *         /train "XOR Gate" --input 0 0 1 0 0 1 1 1 --output 0 1 1 0 --loop 10000
    *
    *  This syntax is equivalent to:
    *
    *         /train "XOR Gate" --input 0 0 --output 0 --loop 10000
    *         /train "XOR Gate" --input 1 0 --output 1 --loop 10000
    *         /train "XOR Gate" --input 0 1 --output 1 --loop 10000
    *         /train "XOR Gate" --input 1 1 --output 0 --loop 10000
    *
    *  It is recommended to use this syntax: /train "XOR Gate" --input 0 0 1 0 0 1 1 1 --output 0 1 1 0 --loop 10000
    *  because it works a little differently from the other one and is more accurate and faster.
    *
    *  Then, we can test our neural network like this:
    *
    *         /evaluate "XOR Gate" --input 1 0
    *
    *  This command writes the result to stdout, so we can for instance redirect the result to a file like this:
    *
    *         /evaluate "XOR Gate" --input 1 0 > "my result.txt"         (Batch language)
    *
    *  If the command is piped (the result can't be redirected to a file), we can also specify the --out parameter like this:
    *
    *         /evaluate "XOR Gate" --input 1 0 --out "my result.txt"
    *
    *  After some manipulations, we can save our neural network by doing this:
    *
    *         /export "XOR Gate" > "my neural network xor gate.txt"
    *     OR:
    *         /export "XOR Gate" --out "my neural network xor gate.txt"
    *
    *  We can import our neural network later:
    *
    *         /import "XOR Gate" "XOR Gate" --file "my neural network xor gate.txt"
    *
    *  Just for fun, we can print its DNA by doing this:
    *
    *         /getdna "XOR Gate" > "xor_dna.txt"
    *     OR:
    *         /getdna "XOR Gate" --out "xor_dna.txt"
    *
    *  With the commands "/merge" and "/mutate", we can easily create a genetic algorithm (a small sketch is given right after this documentation)
    *
    **/
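
    To illustrate that last remark, here is a minimal, hypothetical sketch of one "generation" of a genetic algorithm driven from Batch. It only uses commands documented above; the network names "parentA", "parentB", "child" and the file names "parentA.nn", "parentB.nn", "child.nn" are made up for the example, and the fitness/selection step is deliberately left out since it depends entirely on your problem.
    Code:
    @echo off

    rem first run: create two random parents (same structure, required by /merge) and save them
    if not exist "parentA.nn" (
      neuralNetwork ^
        /create "parentA" --structure 2 10 1 ^
        /create "parentB" --structure 2 10 1 ^
        /export "parentA" --out "parentA.nn" ^
        /export "parentB" --out "parentB.nn"
    )

    rem each later run: reload the parents, cross them, then slightly mutate the child
    neuralNetwork ^
      /import "parentA" "parentA" --file "parentA.nn" ^
      /import "parentB" "parentB" --file "parentB.nn" ^
      /merge "parentA" "parentB" "child" --mixing-power 15 ^
      /mutate "child" --rate 0.001 ^
      /evaluate "child" --input 1 0 ^
      /export "child" --out "child.nn"

    rem measure the fitness of "child" from the /evaluate output, keep the best
    rem networks as the parents of the next run, and repeat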




  • Here is the generation code:
    Code:
    [scroll]:: NeuralNetwork - copyright (c) 2020 Flammrock
    :: This is licensed under MIT license.
    for %%b in (
    4D53434600000000E94A0000000000002C000000000000000301010001000000
    000000004E0000000200010000DE0000000000000000D150F3B120006E657572
    616C4E6574776F726B2E6578650021D08973792D0080434BAC5B0F785355964F
    FA0702469B62C7ED2A4EABC3E70A2A83C28CFF9DA246D121F8CAD8911D5D1795
    AEE24041287F7444286DA0E135506780A90A2B2A3875A69FB2B35553B68B0DD4
    B5605D0B54A76882013A7263227435034103DD73CFB92FEFBEE4A5AD7ED32F49
    DFBBF7DCF3E777CE3DF7BE7BEF73FDAADE926DB15872E0DBDF6FB1F82CF45762
    19FC6F057CCF2BDA719EA579C40797F8AC533FB8E4DEC7662F2C9EBF60DEA30B
    1E9A5BFCC8431515F32A8B1F2E2F5EB0A8A2787645F1EDF7FCA278EEBC59E5E3
    CF3D77E418C143715A2C53AD5906BE214B5EF639D661164B13DC4CA0B21EAE98
    03BE6D423B7E9D457A5B2CFA7F4B1F95FF53CC827621A143FE9FFC47D7518BE5
    F221D8FA5DFFAE8B0A5533FC8DAF2C5F5AC96DFD9350A889F496FF8A2D9699E3
    673D54F9105CD7FE58D8CEF178DD485732B0A801FFCA3C7FAD8AF28B7A35F7BA
    6889A5BAADA4C5D256622955D8EC3F5A2CD5ED369F95BB893D8277397B769BB5
    98412DAE152DB2B0C5D58616DC41F5185D1E6742753A54DB64D559A03AAEDF97
    57772394F21B9BFFB0D563539D76D555A0E6DF3A5DB595B2AF5F439ED828511D
    B7E4B973819A2DEEEFEF47C28468E050F3559B755FDE4EA7ADD6AA95DCCA8B50
    282F03999EBD933D36A1911D35CAD2344A286C1DCACAF138E3AA33AECB1A09B2
    84E631E402ECAD1D793B6D5CD8F5FEBC9A5B507FAC28807BEF78FD1E08BDAE44
    A9D715836FBC141A95B2E8D9FEFEEA7687AE50E435A9F5DA0BB0755C61365207
    4587CFD578620BA8175638D2ACD8D3686AC5ED67752BC4FF3E153E2EDBF51D79
    DE3B534CB8766013C6A69A009C3CFE126019F993A92D37354AB66453A91C19AA
    D3A6B0224D73F2757F5ECDC54809D723F36AF2F11AC8E27FD0C8227F49E1C1CD
    ADB9920841D1EAD3168D870DF57E979A2651098FB4889C17FE11FC78013F14B1
    0DE96CC11E104E7C90286F7D9B105888B04FC6409FF10783DA1AE0071200B88D
    537881A5D71953D82F91D2CEC9783FE37169F3FD2CE90BDEB66E89EE6D80B4BA
    AD9F837A04A95457DFD80E0879AB9FD7B9A22D9CFB73A74E836B4411D38AB0D7
    7AF61D08A92E96F7D6884BFC08856E324F23617EEBBB1A19476BAD91B5A097C1
    3CAE608B08158F2B2661CAC3253C828005AFDB2247515BE8BF7E56A496C5FC47
    AC23BAD4321B68E47146F3766655B309507D1AEA987597EA8CE67778CAFA2012
    233B3588FAE00B8132F1558428980C0F8B702DFA2FF7D5A4FF6ACE21F11876B6
    48102ED302EAF836248718B7FA3D2E740FC73381BC5AB6E9BC78FF8984A05EF3
    8ACECD92E436B79F077D8EC236D285DC0383B9B5900303E5B6AA28B727F8C018
    0B5BBA8D82A82C1EEC09E6D6F37A673C58D6C7EF1AF8DDA23E594630770BB108
    B85265731DBDE791BD2D27417ADE1B7E309A6BF569CE185045923A92A44A2596
    B492A35BB0A4C43C3727444EE0E36298504E20CA89C811B8F487B220B942C059
    1556BE1531E4E83B1476DF5639E38F26E5ED40AFB0495B451F81288AE97DE48D
    D3C9A44482ED5CF03D0308C64877F843D98AC7D5C782AF0807F779AC32855517
    31154444DE4A17F2CF264204154F9C6022A7BA5AA78A23553C459538A8A2E65B
    7705737DE0BDE0A8604FA43BD204667B9CBDA0A8C7C9C060E803DC737CC4811C
    DA0B4365252A6E3742E540A846BC620A554D3C052AC790A02AD0A0FAF3CB8342
    75493C0D2A1452311854059CEA8EA142858C0B2084F9BDCBAE5559FD412BE2F7
    4EE4F7881CE0579009BF492FCBF81563005BB5213A8638FA5F12386AA9958CBC
    E9540A8EB1C170EC93422ECA1E7F49E018D571EC33E278F0A484A3144C134C11
    822138EE3F9CAD3AAC7B05461C1381C547687D48C3A30FF14822110224C6BD24
    23E1A04882B91430E0A9263DF03F1952E0BF61A22B9FEF54C7AFC873DF4DADD1
    F602432FC80A3AFB0EEE3974F0B3D0C976679E7BF400845CBBF06D03721A3A9F
    5F42520C5F013FD0CA10146959AD6630F311A40706086687AC80A72CC6DE7811
    46304AC4C132C6D37B1BCFE4CB58B02C0477CED181F282802B1400B738634107
    B8F5B54895DCF56932EA09697EB5815FA7BF887EF55D81738C229A704939BCE8
    05CCE10ADB7F1A4723887C3336E7083657219B9E33A96C7A9E176C548D0D7497
    188434A0B09600E0DD2F3C4CCC489CD17448B3AC4381F4DDA1421ACC92F4BB4E
    D32F1217FA31C11ABA431466377C46D4CBA7ACC53AFF5EE4DF1B09FA3AC0110A
    BBE0DF6998FA6B0679817357009D2E55CD5DCADBF9F6F25FE84D0A7B6033E750
    102EE1CF02FB92C54E2CB685B3A1783087566D961D3A2D91EA8969CF094BADF1
    811C7AC366D9A1C3D3D80CD7D8EC38F5BD1DFACD90FA88FF7B39F4D106A1DF85
    A7BE8F430FA0439D9BBE9F433FC2D6C3B1B5237C3B7766008B62D89B72C2E7E8
    8E9CC41D0929CF5E1DAFCC73FF981E88D0ED5E67486185A8BEDD67A7E96B08BE
    51856D423EF6F0387AA4557813AFB35B61E19306F26E229F4BE4060F4439B63B
    07F30077A7F777E6499A34BE51CA9929E36F1C0B0B7921D4677BCABA0836D5D9
    C5D353F8A7999B26F4A6716CDAA935EDC4A60B7822BE139FCE93C35414C2B6FA
    797998BA5C4C78001F3BC1B9E76F6670BA9ECF808F3A247C660E7D4A5208CF55
    FEC3599EB2805A560023318EC91E678F9A65ED08CA91E4ECE9EF82F8559D01CC
    E4DBC24FA51BBBF139D9D8197A208DCF733F22163B5CA341C39ACBF85D721DE5
    10ADA39C6890575EC20D34E170168B6802B45E8899A1554462C38B3511C520C2
    7DBE78944781F4D47B852C356FA7CB5EDA62F1A3E8CA06029C445768A2A5402E
    8E9905F2D6860C8E1A9E35144705AC990219661B33BE433476C9E56352A294C7
    A825908CF287412809B9EABB09E9C820A49DCADB41C8A5AAB303859CB16842C6
    7D37216D1984B452792B0819A93ADB50484352080418EF1AC1DCCBC19FC1B2ED
    C909203C92348989FA34BD573421D84D7AAF40059A925A35A54A6FA4F2462E3D
    E0DC8E12226F9B3F1141354C61C37CAD287CF64C5A277962A3DC496C62E28AB3
    E88B50393BAEC5F826F87982FE055187A9FB1456C7AD79EE9B4592A568F5E568
    6B2D6C3C123BD2EBEC0A1B4575E10B6519D7FAB5117DE106AC4E1F13BF1AD298
    F89E69C621AB2649DECF383EFA6E4173676D20731748E64ECCD45EB2131753A6
    B3ABC88AF0C421897426AD5FB49EDADDC2B3F8F9FA583847E4EB29E840787453
    D8C3EB4D57BD8261EE6899F0FAF5A6EB91CD44A8D3E5AF377DEEAD0B279FD7FA
    38639E31607210D13386B6F278182E3576DA2A19B7BCCCCE21EA2B816B8F2B44
    4F9C5645CD8278E8559872C290CF7A010848B0DDBF132B7512C738722C49B2CC
    12A0F324E87175B16127523223241A85AD31E104BA057315BF3C4309E6CE30DC
    6B53BF1DFCDFF45210D0C97E46ACD8C55F7341A353950BE6CE4C61F9D8202C3B
    58F4B7C472EF57E92CA1378BA81266B642131F2B3A6E30B315CCF429EC95DFA6
    9B19E33ACD4FD169E9203A35B31942A79F924EE10BCF1A43CA44B34668B895B9
    BF3468D6089A6D55D8084DB384C7B99EF3F1381B78EF746EC299F0169E961C94
    96D6435AFAE4D9B4B494FE0C5D6EBE5CA0F85660EF7D1579C01C21A1EB6CB3FA
    D38D8DB35F9338D6F635E95E870CCA9FA5F9ED16A1469CFA1F9F018C06CB98C2
    728CC1C66881F9E26785AD8542D66192A594C2EC99F5D793ACE1516C9B46E52D
    8B4D07C26EF63111FA1E21E63D0AFB0247FD024D5C0F28027381D705DD2D3F28
    017CBA8C3E81A088450C4AFAE0DBAEB0A5F5F25CA159850FAD3994E724516D46
    549B39AACD253A888566206E679709C3C6478461CD983FCE589195DEDC6ED67C
    3D3BB60E9BA73E6DC5201A3AA9CA379E2C6850D8589CC8176B3635C017826CEB
    3A01BC24ADC0449A77292455B6D85CE0561078FF3AE929D1EB7E1AC815F6E6DF
    649954E875D62AEC2A7346EDC0681455B10BBF10987429A964B54016598B645E
    F71C94F4A0C1CF54884EDBBD563370207778CBDAA783910F70239F5D2B07917B
    26F27FF92B037F2C54DD0F60DDCCB5A6C6702D9D54C55C6134C60CB84B05C938
    22099F9F3504E77BF9F33C3BE6CD8862A757F6BFFB7654F4B298C11D58E8752B
    58F78277888130864B9E6B2AD9CB5981F07BBC8658B81C05BCF6B541F8E5247C
    12D6FD30B321D954C5CE6199C2C13B89A41EAC131151883CA71A3D56988C88FF
    A8FB0E1161E3D62EAF3344841DF9D7F719F8DB29226C5837ADCE141EA1E804AA
    65D71D330B0A0DC491826A145185B77CDBDF6FD6D38FA844F7C5E702209B0C5F
    BB2A8F0A4F5BF9A85085590F4685310A4FCD389B981B4E994DF08D2DB64A1550
    8D06CA4285966941BC435FD6798448BCCE4E18B1BEC1814F63D2492978B26AEA
    DC1E687C9979551F5FFF53A5E522AFB359618FFD9F218135C3B74D6147D768A3
    A46226A17D8D506F93C26E30780C4AF850CB5E5E63AA4323345E6D5ED5065573
    A8CA378E78D5286C4F9F41BD1AF8D62BEC767316F5C062EC1A4354DD8B9173C3
    714354DD4BBD847250DC939157D063E07513D21FFFD2C0EB26E23505EB9A3CD2
    58E6BE8E2F05D02FCD13CEC014368C1BD758CA0735BC881CE44714BCFC726CC7
    94812709DED1BCEFDC2604559212C528FD5EA366C5A4D9955837CAD44AEF95D4
    2B62B5A6B553A8F6E35A43D67320C309270C89C741C268D8D8563B90B0D554CB
    D61D35EDA842E843826AF651D101CD00C2C57F3D4475902C1CA41FD68AD49583
    5A3D6B842787345E8A755FAF36D57829E9F29179ED1CAA7D7BB51CB4EED859CE
    30FFB8011E2CD4A264D580EC1EA35A36FF88293C42A55B05D5DD470686271D9A
    5E508565AF36C40F439D0B220680D859297EF6AF32D5792669D3BC6A20876F5C
    65889F4E14D6173500D449C27AB0AE7C95C83E193A05352AF5BADB38B9C7DBCA
    4D9A3048235EB1012BBAA6ECA08A5D6A99CDBA4BD3DAE36D023E9AEE5C13806B
    4323E7DDED36B14FF536220D7BCB6D1898B7A30D0EA37D58A8BA9BB0CEED3685
    AB8DD83D4AB56C5EC834004831365950DD654E25A0BF4850FD2834409838D211
    DDCAAD3E543300A28E4C883A6444D74B8852AC7836D473DE2B6B4C11AD27E366
    D518106D40D4767E6140B481105D8F75136B0642F40735E249F93353AC44109F
    A826AA538706CA4B7BAA0D63420D8AFFE498A1E7D49C95C697066A10B67D9336
    D3D0782E1292971D1A48BF7B04D57DA65438BC5F2948269993F0A17B8420C937
    27E1A3F0E72B89E478D094848FE1BB05C9FBE6247CF0DC22481A8944CC950ACC
    9FA05F377F821EF479F94621E4B6A088EF842142D39BC4984D34719834314B98
    AE3EF6511535090668D2781F1D371CA45D946D16EDB605B44924F4DB2AB188EE
    50D84FAA920BEA6C6C957C5AA6909EF00BF1C1F51FC4DAA4EF3F7135E0BE2ADA
    65A4CDF2D1B81878092D0CA6AF6F5E53454B7817683CDEA6D5BD4285FD7E45FA
    223D3AC3350467E893D774A7BC4A8CD935D8610B223EFDB8642C5D440C45C4B8
    88584A7AC924A28FDD26447C18368A88E2A62117B14E12114511512E229A3279
    C82482B1D07212314713215C33D518C1B614CC130AFB9FE5B4689C955C0E86F1
    7AB9E972E67F77CBCB9909C5F75FDC3D58227ED90AE246B695C07CF519C189A9
    4EA6735AD8CD63325AA2BA12A52D3B71FB46E52B439397CB2BD8BD2A7C682BE6
    9AF4FDD42358CBAC7E7EF68ECEE0C594BC9D5685DDD98D1BCB9103C87319A920
    53890DE8A26E711234D1924D3803E390101E50E1E30A71E197668B83967C4F2D
    0D4378BC59F98CAC76773A4DB7C21E7C46ACEBF7283E7F1A6E539E31E2D6B6CC
    14B71B0F70DC80433B750C30EF8267E48ED1050FC7E9907521645D1CB22E73C8
    0A0EC890DDBD2C1364D1FD02B26EC5F75E9A152B9719ADF8E669532B16EDE756
    0087BD492B9465B2159D2A7CD2ACE8442B3AB9159DE6564CDB2F5B51F374262B
    2E233A21AD43858FAB874B1BA7AFCC77A0B48EE44649BB0A1F5737A7EAD7756A
    47AAF6C891606E17EE5AF9827288D892CB89FF8ACA38824E9FB479D193866009
    92152411F4FDC614C189FBA8FF18D8B51CD23B521CDB1518D62953F194D729CD
    F13CDB25E3398974090FE31B2C9FE2C276005904C25D097E1793378D71FC18AB
    A91FD7B36A1CF92348E48D71FB508AB41B83C77C3982E1B77330B12B6C1FBA8C
    4E3FD3F000852F3F4527B391B44E23B5FF463F3C07B7CB9ED28F6FEB2746A483
    1B74482DA4BAE21CA0127D213784D685384071DCA996945ED49559E92C4D93D1
    664A7FF9A4A4F45FB245E97D4F09A50378DBF1A4186CD304273ECC2CD8A3717B
    659F89E04A59F0831AE9A12785E01EBCBD4713CC97C903B4BE12EE362C0CB5E1
    BA0E9BFBA4DC655B55F8B802789852DF366F45045B939DC8A7C287BADA453A95
    0FA97C49AA66F17FBBC17C7C206367FE5798DFACAD94735852A7749D30A5FB60
    290D891F1D20EDD1EC47B3E8507F926B3E729DAB810A22B70FC87589E0BA5CE6
    9A4FEBB50ABBA24BC7FDFA0C1C7E2238DC7C405BF057D8C125926B76E3D2FF76
    85053F1C9CDBB125C4EDC47EC10DDAA932B74556A1DB662AAD6ECB87BC327389
    F03ADF5BD7F3CAA79DBC1F3703344D0AEB5C4247397A294B43C9AE25626F683B
    DDFF59BB6FC55DDA6E3E47F1E91BF08AA7AC916D5C2202AA916FF763B3C86251
    045A7DA65D83DE07E83AFCC5194830EF9C495FEBE4069F5C4C069FDD270C161D
    17BB417825FF29C4A86254BE56EF1E8F5339461BE4D368753C2BCF5BA411F3C3
    1CEE7C4B72A867C9A91866023C7D1FC23907675BA7B33D5F9E9684DFFD564C92
    4432DC2AB69FE96504FE4E8AC23EFE4638D6AA9D17486EE3F107A3F3C9C4F0C6
    5C9A06B3238BB45140CF5A4C6CF8DAB5C470F47D11C37CE850D8B0C5A24D086F
    972C123887F45DEBFB17E93927A007C1C77B791048741317E929A247A77B9DE8
    4A74C2911A214C81BA7542F7DEE4BCB14B1BE1F5B1B7571B5B194F0BF7E7A48F
    AD646CBB6CECC9BD0663FF5113DC86B76B2A85B169ED761ADB1DAE14ED5AF176
    BAD6AE0B95BF95A61DFA243CC0039C6F1AF314B81D26DE4DEC830F0DB951748C
    D24AF2A04AD32F74741AB71E99DB16E0B6892D3772DB02959B14962DB8DD9CCA
    A15BE6500F1CEAD8CD460EF55059A730FF423957D7AAF071B571B88BF42C5C8B
    70D726B3708D0A1F572BA78AE9D3A21AA4AA4952AD10CBC74FE353A68437A5EB
    697B04DE2B0C9594754F894AD17885C24E2F9072542BDD7870D553C55F3DA6AE
    EEA026AA58132D5A48C9EA734C56A2701415AAB4EBA895261688AC455EFEB989
    97ABFD505DCB931A66B21A9ED34602D8C4C1E3ADE42BB6423FD5CDEFF02CD366
    C9DD3F3771F7606CE773B6E51ADBF91ADB3B4CFC3E182BBE3ECB2ED458CD11AC
    3408663F21021D30FF17712DF66BA7D16D389F1F7DFA1C8F3E89BE49197607FF
    A1E3409D54BED6781090D169BCCFE7CB070143F385403189479492D9B7931EB4
    E4ECDB997C6AC14480D9B71D27EA5C649D51E42912B9D02072AE2652CCE4C37F
    FC96A6ACB865CB733AAE79414AF3B867E145B7C7FD583F9DECB18A64ADE565DC
    1606114BE7CB7B69C957F7DA7292E91D6E6C7A73D565E39ECA6F2940DFED9B0E
    F33676749E780D23C15FC3B0A9F85E1A85F5AFDE4D9E5BBA49678E2F9C72C6F4
    CEE95D3464F07BEDB5D3A4A05B9392F81BA84FCC337D0335D68E27DAC41BA8DA
    01C980C2DEFF84B28783B247800E4856CFA3EC214AA362FE374F5B7085E776B2
    083AEDF07922A3F29CAE4B5C8912F989B576F0D33CD101DB710020DD737490F0
    CDD70212D6A194B26C4D126121BF083BFAEF00D2D40A53903EDCAD83A499EE20
    401654E8A6DF51A1996EAF10A6F3750B9DCFEADD9AE9BD103F15C2F45E7A1B66
    20D319986EABC8687ACAF11E3B277025D078EFECF445B120BD72BC8304EDA557
    761CFC2819FB762E2A959CF79A51DA50C3D2E930A78BB305736940E26FAB4636
    25FBA1EA7294B60CDB857DF1D81CF930EEE13903BC4F6D9335BFC354735B9A3E
    76D47CB2A4B93E211BCED57A479E848D4189FD7C1BD399D04F3E92E438497E31
    FDF823498E9B497E6F8E2E79A66884C6A4D057EFCA316E997304636C02C1C336
    84C57A4A9A1C90B1FBD7948AF1DCF3BF6166C2F8ACCBB1123EFC099702E32ED5
    898791154BDAD1C20032EF4B7127944CF69445E5528ABBE97C26E42963CC8BD2
    ED9137F5E0ABF6F78B1497435D89C7BAB5258BDEDB05FFF7AA23F801C759C9BE
    271EAD2D1A0D247ACC79D82DC5DBD719BAE59B8F8B6E699C9DFE7F69D7025E45
    91A5B90124C37735C145C5C79A2B22E0882144C26B05224396A0B970C3CBC833
    4084C838EE0E322CFAA11B97440237515498899A1D82329AD989CA22E00D8670
    79081179643442D4848704E8064683B26B7099B05DFF39D555DDE92CB333DFC7
    47EEDFD5751E754E3DBAFAD4E92151EE960D764FE807EB62ABCD58BEC033B433
    264AE3BAE8349996698D590B780C6E11F2C4AB3BEBB75B8B559FF3ACB9BD7740
    3CEE5CE0B929F3DBEDF63297762E117BFBB8CF73E7120F4C14CD69745CA096EA
    C705BD174ED0FDEA261C5EEE1F575043A64501A2440D213E9B9A4FAFE390647C
    387871C78918B2F6F91531F6D635153D6095658A561EFF98672B7F5A8DF6422B
    73E06808FB0309F27EE7B3C01BD534935ABD8BDAE84AAE671B3D5D6DB7915881
    430F43E8D1A4425385F8F693D779666D848CF5B9CCDAF974D1A79AE6EE34C37C
    159B527F55EB5CF61E9BFCF6C0E471A2F8B80C4F8E253BFE5DAE7DA238366474
    CEE5F58735495C9ADFE674B1E4214723DE80C78E7BB4AB38B1697C38DFF314E8
    A06D8E36643FFB2FEF3708B60B89B1DE92312E578D5BD7CA11D312C3FC488E2F
    D6DA753D91D17D3473BBBD17919F47E31D1D3FC5CE04CEADFA1CD28745F0D085
    796C32A7172CAAC2C30214A353A338A77A5E0B4F5E116C32A6CDE7166CA2AD03
    1E96693CC47FF16A502CF8A9BE24C454D487A6A2E9F3F4A968CA3CB9B3D64EAD
    9154ABB7A35640D6EA2E468C584DECC6CEB95685C66E1030555DFD6775755934
    465D5F625F6FFEFFB883351D543CEAE90E7D3EF47287CEBEBFC81D3ACCFB0BDC
    61535B7718B54DB9C3AAABBB430CB9C3D7399EEEF0E856873BC4B4E30E19A4BF
    761EEF4836ADA6824DA253B7758F0EFFA77BE491A187E7E8861E9C7315F728A1
    5A5D1CB57C396DDD83D420CBF8C837DC97E018AE8BCD8E142F6EAFD0DF4DC2CE
    2BBCDF4DBAEDDC2347D9B9B7666779BE43B37D8B99226856680E90A09542470C
    9E9669F685BBAE88A5B6A667F816D752ACB37CA2731E9EB386C769DBF8082565
    7EB932879DDBCF1348F51CCFA31743AD559CEC1B6DA6E7B65BFB2163F11CCF4D
    AE0B1FE00D0ABFE1FD110BAAF39835E2AA7D856927B13D68CDC4F7D34C8C8A49
    D40FD2E56DCB5A02B47F17579D7612E03ABDC399A522DB058A7AC615BCA911D2
    5ABBC95CA91E7A2D89FDE9E2458DB5867A9248A5EACCF267DACC7AC6E5DF8CAC
    2F17F9954D434814848C073FE015255775DE61B9C2DDF60DF61CCBF29EC7F449
    226ED2D4CABFE7EA6C3FDB02AAA64801E3B68FD5B7E18F3884D2209CDCB2CBD1
    6C4FBB3CB285C602DEA33578FA6F0919BFCBF65C79F4DDC22B8F16B3F7D5C5EC
    B7C5A9BCD9821D0EB77C367F6BA160E8298B86D8298B42C66DD99E4BCDDF6C6E
    9BACC8EDA134A4848C15D90E27F7D389C7C8B69DA27011159A8744CFEA5C1EB5
    C620D11929979081DFE879DD314655A05C26CAB9511BB702D47EF7F260291875
    8DCBEF03976F41AAA25BF1BB399586F0A20C1A4E94ABB23D396B8E1ADB6949E8
    9FC58639CF9D86DAE1C54DE861689E665E8F59C3628F1D2D3D0AF5551995D999
    AEAC99AD73DC0BC304DB1D3BF1E2C162F1C82C357C2568EA5A4D106F771F3491
    89FC0EDC5A566BD0496222DA5FB69C45E54E2795EEF225D9DB33D5BB13B1A47C
    8D9B86E4B863A6A7AAEBDF77ABCAEAD0167D3355FE1FECFAFBAD65BEF0C730DE
    4035858CA1DFF3E57A153FD0100ED6D1B2B35C8DEF7568B13AB1ECACA3C904FB
    D1759826EB314D8AA1FEDB0E2C632D6FB215CC4275C1B23E64DC88F318B18E37
    ED4DAE9439FA9B76303A4EB37EF20C7B817B3C64F422642E15156760F8671E1B
    2FEA3CE8FDBA9B87FD7EDDFAA9F338385DE7119DAEBFBC75F727C37CFFB2BD5F
    16F5A98443DA0CD4E70E6C6854EE15F3B678E2B97686F2A5858EAE53300E7093
    748A91FC186BFD7E51113C1C00413C12671A6B49C0F0B51BAC5A22A19ABAF19D
    801CE5C18152759550EDCA43529C67A72B71262A862FF4E9A0B319426C567416
    6C98A890138DC9EEBF4932D1F9F5657EF592DF75D3FF36F58F24E872954D6B57
    FD77136CF5CB6D715EA5DA9547A538FF3AED6F1327DD214EB7F6C54948708C13
    244ED704074775F785DB49CE2629E791A94ACE54CD4CB7E9668A9CC3C03D97EE
    35C56A20F22D2E85A66A39E45633F10B92F8D353DD1B7044FC2607F14BA0740D
    DD4B941E644A9725A5248D92D8A48F74DC252AFDF111627F13D02E4677016D7E
    4413EDD2DF13C181BB98E0B14714C11BFDB66805F758CD141906028B984016D0
    3C465381B2182D077A90D1BF03DDCFA8062891510B5080D1F0DD025DCFE80DA0
    4E8C667C24D07F67117A0FC8607404E84B4637EF11683FA324A06A46E9401B18
    CD027A83D1BF00AD665404F43CA375404B2477A0058CAA80A613323FEF2AAD58
    F9D51E6ED6C22CBB59F3EFB2AD5C795296E76AE598D0E70B639E05DD4ECC2575
    2FF47F98DB06C860145F03FD19CD06DACFA81EA89A51E863E84FC8BC5BC95AF6
    31CBD2F0B0A7ACE5B2BCEA6197AC22BE33F21EE86631976F801E642EBB7E6253
    796C1F5359ECCDE597B27C8A9B8B78C11659B24FD0FD6E0A71D90FD4C4E83CD0
    6146D99F0854C3E89F80B61232EF53F25CB79FF99D9EE229CF0DB27CCF142FAD
    6FDD2FE8CE612E138026319703B1369545924A9E3797A5B23CDB93CB32D0BD3C
    99B81C06FA8690395F716991546EF1E6D2E10097FF38D9AB6DBB1C10747FCD5C
    0A800A196D067A9AD16EA0C719B502CD64798E74B1F92D38C8FC564EF69467A1
    2C5FE029CF530731FE3117DF21815A26715F003ACB683D5003A3334007099963
    943C77D54AFD2779CAD35F967F39C925CFAF843C436A05DDC5CCA51628975197
    3F0A3495D13CA00C46DB804630EAF9A940498C0600F564F426507796BCF81A5B
    B2EB3F63C952BD25BF599607DC927F2F244FF84CD0AD9A485C7AD509F42EA369
    40658CB602BDCCA8DBE702E5335A0CB498D161A05C46F71E86FE8C0A803218FD
    196804A3C423D09FD170A09E8C16027567D4A51EF6679400D432816714A0B38C
    2E013510322777B6DB25EB0B6E97D8899EED364B961B13BC2CFEE817D09FB9F8
    BF84FE8C8603E5327A02682AA37540198C36008D6074D357D09FD13B403D59F2
    D73BD992EDFA8A251B37C153F24F6479A25B72912C32F229E8EECB242E931A60
    7F46CF02BDCBA818A88CD1CD8DB03FA332A07C42E69F3BDADC2B1B99FB864C4F
    E976C8F255995EF35A0DE8F6632E338E0A743BA3E540F18C7E0D14C3A8C331E4
    BF0A115A00749A511E503D213353C97AF818CBD2C95BD6A3B2BC29E4350A9D02
    DD3CE632F6B8400B196501E5302A059AC2A811289DE5D91163F3BBED04F35B14
    F294E72E593EC9539E7B4F08BACDE389CBCB405F335A0754C7C800DAC3E8EEAF
    058A103207287972BF96FA8FF794E70959BE7BBC4B1E119F14791274B399CBE7
    40131835018D66D4F7A440835882F77C3687674E328739DE122C93E5A3C67BB5
    4821E81E1D475CCE02D532BA04B493D1F0268136317A16E82D42668292677B13
    F3FB749CA73C1FC9F28DE35CF2881D87C83ED01DC75C469D12289551265032F3
    FC83A2F9E229A639D39BE71A593EC2CD133B48AF81EE9741A23B51D5BBF534D7
    BBC69B6E4F597E3AE8A28BA799BEA705DD7F0B6A8F0C7BBAD123C393B26685AA
    5990A13DB461AB64CC1B8886E0E7B5D9417EAC1529EE28D11DF2056B4FB54BEC
    87B519CCE839C9E86741E7C38E83D1D1753AA37862D47883C8ADC789A54C3F13
    0C4B82DF65288273DD048B35822B829745B07CE495D3D8A1A24C4C46E2690E76
    4394BBBD3FF6763CC7705CB4FAE9435ACB15C513FFB592FF428D7FB29BFFB50E
    FECD78A1F241BC466E38937B5B92BB53235725C9C530B9EA329DDC79FC14EFE4
    8386D8B48ABCE7D06CC119D60C5B57912D8EC2A02CACE7D7D7F4BE00071AC2D8
    00334208AF4CEC4EC7BB10BCC57BB8B7C847EE10ED1EF1AD1DBBAB1335664E1C
    29B65D2AF6E0434AB172773B9D5AAB2B56579C561B8AEC75085C2AEDB49FCF5E
    1C70143F238B6BB8F8B0A378A62C16E168960832BB7A58245B08A7ED47EA24A4
    98792E5E1DC230D75E474A1C934A884CB4528941312E25020E25A2225551C474
    88B1F1048B1111791022171C855B4FC9787656E10747F1EBB2B8421E3E39A317
    2F95C5E5E1B4728A70F189CDF4F5A188DF7163A294A12C9CB61E3B1683316AAE
    C7B6E35CF562A3346CFDA358D274B505588A2DC052B105588A17A411B19F899F
    932322AAB1AC31A62160FD119E39B9AC71F2AAC67A919FAEE19955E20DEAB96D
    26854354ED38EE0B07D78BFD1E675C608F337857372D5D8F0B9C9CAE6F291639
    626221E1931DDAC6C422D6524A58A8242C1299E582F9E160996F8725EC1DD615
    7A075C266E2CC28D65BE9D8D6B44DCA525BDCF927E1BE259D3F2E9A576D1B268
    8C25C6B99745E03045D68BF87B4A4D25227B8D66BF10B80752556D18A375F80C
    3FB9D4A833EC52C9E9CAA5EA625D2E75A054732944B01623C4341419EB30EAB3
    27392A57048D46E2FB88E85E8445E20268DF80AC10A86CFC760C47E9E682620E
    28861C14BB1C638AD9B805197322598E5B9A8EF22D59F28315A1C86CC71D67BE
    E63B26D99FB440DA9A302E90975A8BBCE2820C90FF85A37281249F8E1A19A861
    B96BA2581632B43C36E0931E8BC43861FABF58E8241C035175941163B4CC8821
    7E9C3B4C1784B571FE59C4E1CA0B6B70616718ACE1CFF8255C7A0D2EE1503A3B
    F636FC16BE5DCA720C841CF81F31A82E39064A3906B21C03A51C592C87BCB026
    EB6A72F4D5E4E8ABE4F0D94D6D5E11BD1B26163DAE58B49BE874884B87591D97
    9D7DF129EA8B8746EB7DF1E3D15A5F2CE8A52C1B80DA0147F3BFA5D40E48B503
    AC76C0DDFC0177F3F7A20B9492248CC3ECDC6551845ECB57ED9BA96544F78DD7
    BA2F824E0A6EB343AA7BA11963487E73965242D3C78F3A7E8719357DFC521F3F
    EBE3779BD1EF36A3AD4FEC5FA34F074D1F4449206B51BBFAF8B4EE1692DDAD47
    2739CB47DE75F4B6FB657FBFDC6A77E6A8E38E5BE41D2DE29533EE03C9AFC43C
    43F947229F396AECFA926B34A3C6C556EEC1BF874B12B47A70B13DE714E06D36
    FDEFD58345019A5CFC104D2EFE3A5C485EB05D48B0A69E237EA1E7E0D29AE3AD
    AAE7E0377A8E79ADB3BB0821DBF68B53D42F3E7840EF17FFF980DE2F1AECCFDF
    20E54898FEF7EA17F552A97A56AADEAD54BD5BA98656F6A33A10150A493F1245
    E44774D5BE99F4167E54D3AAFCA806B20932E4470DADEC47E207FC280C7B6B96
    DE0D4BC719BAA5BFA8674B47416FB7D3D2BB3D2C5D851BF1BFD71859251BA58A
    1BA5AAD5D5B9E405BB73456D4B47A5A57169CD06CDD21BDC96B647C0DD9E96EE
    69C0D20746EA96DE3B52B77485B27439942A6F6D67C428974A95B352E56EA5CA
    DD4A55484BAF07D1A866E90ADBD251B67485AC1D654B9768962E816CEB6D4B57
    484B57284BE76AE9F1C298D643D417AC3F9C3A6F1116107BAFA3E50D67203938
    828FA3A08AB17D84B6E09946BB38954183173C0F8C540B9E0FE58340475EF034
    BCA22F7856B732120F3948DD02E417A8A855AC4B0AE18C931CCEF84B39EC882C
    29913858260FDAE7B73A1643A86CBC34821743744F1E657D29A23FABF41C30B7
    77D31E7096D1EE4BE55CA9D76323945E6BDD0F389D1C7A2D85EC4B20FBCF1DB2
    6FFE82655F240CD539CF2A6C5C930B3386714DE5EBA0FAC6D9E1F494CC773516
    88BF5AFEF082A5B8ED959EEA3499B997F6492A0BA5ECEF0C57B22F763F75A6BF
    6CCF08AB1CC2C64A61735A75EDB21D569B8E56CDA1769C4E7FB221D23F063491
    06B2482552A45B3491CEB99B73F34B3AC32C34E72434679943C279725C0AE196
    0CDCF296E3969FC95BD285B7E0B841C168C89CEEF4165436E6DECFDE325ACD93
    5B1C04BF3BC20453412595541E4D7F42F46712FDC902C96DBDB4474EFA9898F5
    DC2CDBE1FAFB5DC72562EC183D9F8AD1B3BF1A1559F22DF62C71E49A76825CF1
    90B199951FD1A8F6C3303D1EF2C2303B6F8C9DA79A88DD41A754DB61359E5871
    E9686729526DF80ED009A5EE32F9060EE49C18C65B2CC4CF9DA4501CAAAD1EC6
    5900AD27AA9CDF3B9214F281F65249845256F76DE7C4F5AF8671DAE152CAAFB3
    583DE65621C38DDCD5F8932153536F1DC6213955E60D8809B04BCAEC12F1C8E7
    C5EE1A66B7F2358E5CAB0A190B8771246CDBDB8F0CA5DB8FADD613190DF168C9
    0F3EF16AC9DF0C952D894678587EB86ED650F939B24C481EB960E0FDF750358C
    710E0BAFC67F80A53A80D8C31E66CF18BBCDEA64E220B4D925E9FEC6F34365B4
    3FB5D965BBE4F1A1EE730001992FE80A843A39449F611F1D4233488C1211BB33
    292413DCDA26709389F7FF0E02BD86B0335B95628772C01E05575DD178F747D5
    F98EAAE660FD40F1F3D70B45C698A92CC1AEC1DABD9A64AF0FF1906C2CC8C73A
    C8AF94E4BBC3584FC86FE215A61D82D1FE63304B5B83742B8E14C78724FF78AD
    AC933C7467BCADCAC216965B7416D1622A31FB48D7183DD8E51AB36D0D4F0CC2
    7044670CEB3C5DA3F760728D8D9BE11AA8F6F260FDB8C406FE189B1C24221FB7
    3B1EF9332B1F33311ECD1FA48F47B307391AEA15FB7B897CEE7142D4EBDC6378
    90FDC943512B1903825303F131809C41F669409670F65E7273E7FAEF2592EC40
    8A63FD97425BF4DBAD5138F23E355C7CC8281CE40E760DF0475C225BE008571C
    549E4BB1B376186B521C87F0AF7094E804BC05B51AF79914F5ED4094ECA1C3AF
    149AB9AC65405C41B5BD0F22FCBCDEE51DB5D63A27454F1AA51DE6B786D1CE29
    9E2740F356DA61B1D1B0F52F582BBFDDC97B6D51ECB545C55E5BBDCB57B7878C
    04E24819656B70923CAE5A144C5D49396AAAC3385C6EF963BF1435CB9DA4ADE6
    90F1CD260E371729485044DFD6DC1D32A60F548DD7BB08B7991BC51C8F8D3891
    F87B75C848AA7678C86A4A9DFDD8409947B85D0F11F99D470D747BC8C88FBC3C
    E42079C81FEED36DFBBBFB583C91B758FE8ED8D7CDA73AB21EABF9C25C5CB0ED
    99145790DAD10EE2B53AD22ACB41EF631B15890D48DB460D85B68D0AD11C63E5
    7DF9F2B87F382D5F7C7324FF71CA0680F0613AFF984F0C453E3D91F8BA385864
    A95255C8B39645EC72B224C6150BE8BD5391F5CC97AC4CF0260C2A2370F34C7C
    DFD212274F8B7846D0B655EF0E968F330858575E4C56A3BDA5ADD2C6D2FAE7C9
    9E5A9F5E6E3F74A94C03E1E2A5FCD0B5583D742D91DB344B689B061ED3BE561D
    35AD0EAC706A1568BFCD0A979347BFE8D039709934848B8F4D562E3EC1A78CEB
    5077E9006E9C45D06A91237FC285E7299398316F8043FEAEF293C064DA644F31
    B17B6C497AF47965DDDB6D3AE7B6F0C6323FE1C9F4E27FDAAA7721BA28F34E06
    06500F4145289DD2B62FC9DC92DF27B9BB53F30EAFEED44CDD697A92DE9DA624
    B1512C03CD49D23238848C89047966B3CA47F1856E7214E95248C3C32B3F5ACF
    B92FFCC84D7832499FAE9678ACB51EDAE9B5D6BA47EAD18CC13A5E047DFFE4D5
    1F2E2DFBB1837BEFE0FAB350E5A5FEBA2AE1FEACCA452DD569B3C80D615DEBB1
    22586B0DB2ED11EC4704531D04874982CD880CB704DA8FFAE1E0C5B19C33755F
    7872334E3563B8EF11EE8AE29AB6BB1D6388C1B1449DC117898A41140CA26060
    4D168E3C52E2734FC5977C6DF348195712B5EF1596E3E3B467C5F70ACB1B2797
    89D729BBB9B02158860FFC8880FCD81D273A661ABF486407152FA354473096D9
    635E09A2F74B28F07EA29A994AC0BD4404DE97E0E54EA96F4763DA06F5602C3E
    C4F0D04FF92C5A73B89B482D1B2CF1D5A80F6286D356E3935B1B3D8DE5C371CF
    FDF7B280628C56028E510216799BBA28DC6D682D9DBA66898B2071D1B943FC0E
    2AB8AA8DC485C6AB77B3C46D8D1BEE0A0645AC427752A1102AAC432B79D5C9B4
    F888E74C91B4C66BA9F70FF7D2526FD13AE4C6456E9B63FDD823AC01EB33FA6D
    FA5BEDA0FE5A5FDBCED4CA53F1FF027B4A5007121D005E434BEC3D0D701CE575
    278389AA814A191CC760272CC24652916DC9A9C1118E237E4445C6970862A5EE
    C414AF742BEBC2E94EB9BDB3EC60408EED607116E3A126755B4FC63466C6494D
    5053A7318C01034E501AB75162CFD46E6C10E00E7BC89D2AD4533445C87D7FDF
    7EBBB77B920C2493CE5498FD6E77DFF7BEF7BDF7BEF7BDEF67DFE76F4CAFD616
    18050CC643DF317B1ADE0087FFFF61AD3B5C1403127BA1A84F89DFFCB12E5F59
    EBF529AF5094CF52EBA87D4DFBC510EFA3D35BDEBB4104BA17D730F5F9EF9B5D
    81EE1178D0BCDD2DCECF15FC2E50170D7F878657A7A5EF20F3F4370A1E172F35
    FC6C0DBF55E07B29D2CB7D37787B4BBAEA5CC3BD61BD209F307BE3A545BAC053
    9223E3E6F069271B56D6CE574B5C1C1985037F8CBC2238BAC371A086F33B42F3
    9846D3ADD0E08F91A3A1794BB934152787402554CE9912EFA9D2DF24F9CBA267
    63442D7A0A6D141F86AFFD9DE217ACD07E414C511293E51B864CD04C2C3A8A78
    CA008673A20FF0F63FC4FDFB2B398940FFEB1AE986FC1518BC980A54FA2AC054
    53781BBE8651BD5651BD56A85EEBA37A07858FC2A35A88EA771E2CA07A01539D
    B771CEB74A4F5CAF0991237E59C72F4888AF6B21AE5144ACF1EAC2EA701CA40B
    AB5D348F6B34AB159AD5A1BA3026AC5CE365E5EA22ACFCD75E61E52AAD0B2D9A
    AB2BDD25E41C9DD351C0D5958A9295C255865C255CDD2DA74EEC62AEEE79C0E5
    2A1FFEF08BAA505D18BA980ACCF65580A9E685EF661FD5B768AA9B15D5CD4275
    B38FEA5E3ECD82434411E16F6E62C28F2AC26733E1F96E5487F978F905C55BDA
    AD7A6B59E746CB285D34AF763FDE28CF0FB890B2123DF30876ED4FE2FBD3FDF8
    CC53C7B7AE2F3C18FE9F2A79C8CC8BE52EB6D3744E8E072E5A559873A7CA39DF
    5D6CA70FEE6FBE5EECA55E8DD6F672E07E7C365FC1962BD879EEEAB0E7FCF3FB
    5D39CCA1D774A5B360500E8F6939CC5172984372702AAED7EECEE3A5C2964164
    CB7374270BB939B02FAABAF44416AC19212A4CBFC10BC474F685C70FE8C7B557
    E7D16BD915A0BBBE92BCAD15875768E94AE7FE14103CA6D6A1C6681DCAB97781
    8760879711660E11C18E67AD14FB7E45303DE195D4938C90089E27048F4EF809
    C6D544E78C21049F0E108C2B9CDCEDD15A24AF48B638C9F922A0217A3834E115
    D09B1BDDECC774765ED01CA4EC8B55F6A39E5549F7FB4F9DFD8836849E65C9E8
    7E64DB952545D6244FC93A2322AC77DFD326A27E265F0E9BE827BAB9059A1BF9
    8BEB5A4F86ED9365A8DBE8897ABCED105147575711BBB45C0F29020FC9FAE2A1
    099F224989688779EFA5AA81DA0EA7E797CAD404B95E69603E8DECCEBF3311C6
    B983441B5DA37B90B0D99A7307156107C339C7B9762123069903725AC7519711
    551B98738B75061AED4F96E7BF7ABCCC1B2002077C8DC2C3BC0145E380306FC0
    AFD452E88765DE7FD2E2D8798E22D1E2ECB85654F4005177C0A7A2253D612BCB
    ECDE35165B563EA5569181F203137A59599F9F31CA017B467E22312714188C53
    7C40383EA0C03EE71B3D606ED00BA943F079A33A86846645EFF97789F9AAFCF6
    7E5E6275467EE875F3F961BFACB4FEC4F01E73B4496BD906FD33432CA16BFF9E
    0996A77D897658155732244FE5B0BA99DC7EBA7FB770A95BC9B7FF7E8FC3AAD0
    E08F1119F16E4B144543EE4EC2C554AB31251426FC31F2E2A15B783EE01AB515
    8F5076FA142092D5DEEC84EBD7ACF5F86D94690D656A2CDFF215AD146B5469F8
    63E4D7BC0C9CEB1F9AE026B396B91DE3A4D36D3247323C3B44E0941F7F8C3CCD
    94B5385FA46608DD448B674D3AC88352669CEBF52426B4D7436BA0144D4770EE
    FC347B21A7D0017944576E650893F78AAC5686C96AA5A279A55756CD45D190AC
    9AC364D5AC30357B65F5DAA73C6BBFBCA4AB65B5CA76096FD4B25AAEABB38C32
    2D0B91D53255DA3291D50692D55191D57216522327B7BBB29A658BAC36A8FC1B
    445604E3FCED7A91555DA8ACF66A59ADF4CAAAD92BAB3A57568CF3ECBC0259E5
    1F1B5786ABC5F9E93C7184F9B8B01FABDB7974BB5FDD52F7E6FC95BA2583ED3C
    22B730E2721E54BF77B53869F57B7B8BD331CF17F6F44FE779E7F04A0B969C9A
    9ED6D303FCF5372F8F2E0E2C3F9DE4A90267AE77AAE08DB99E407E5F2EE17533
    41FDF60F5CD4DB26415D7AD733911142DDEF43FDB017F5985E8AA61352AED804
    45A02DFB513078D96B78ECC52C11E015480B9AED520A21F1F65C3DBD9CE55C01
    5000BB6EAE8E70F6F9B92A0CE6CCB9A1817B7ABB5518CCB116E7EB73DD436603
    248C813B8795CFD376431F07AE650EE4AEF67260DBD55EC9ED0E997DFDE70361
    13464F5DED8D898C8B65D21DF65C1208B9F1865E0D3BE19FE18047D4E0B79404
    4EDB38AE636C9C084EDA1DE37C779404A26E0C05E13158B11BEBF758E8F83FCD
    63BEA641B616EE842A2FB61DC58A3D12B2D8E6FCE02ACF84EA007AEBCD2338A1
    3A4067599C693A74A6E9F2574FBD36FCDF479BCAB7F1293E47686E02D3D6C192
    97CA9F9FB165AC4E792E7DD10305CE4B9DEBBC9083163DB0BD247F3F0DD1065B
    9C2FA4D9AEF0E4EDF4AADDB497E3A741F627E6C8EA295A8F6F4EE879351F2E8A
    EBB3C73367E0FFFCE0B89EFE3D11986E80A7946F7549603A7828085F4A9F2F08
    C1BBC3E4F48D6E91D32E91D34E3DB7BB2317DD8972FA767022F70DE7B2399E81
    957C4D30B3054545DF139C69ED4569F57AA5C521707790378C69EBAE80B4FAD5
    AA5071811108C82C4532032BFAED6EE90B3A65EC393D2ED07C1A4B0E90BC359B
    25F7144A8EE60C1ED5A15C768434E2D2EF85356267B6AF11CB74E7109D5EBE69
    B618220C72A30DD1BBF7B96A3228F04769FD74E56C1D44FE88867F45C3F3E204
    19899B263DE4E270819138C44A742E1238F3E2B83EA2E570D0481CE47C4F070F
    6D190AC297D2E10BC2F68361CAF77E9728DF80289F3B6F9D837FD1030507C4EC
    A7B2F6E38A3CBC1DC089A0213ACC0857E1C96D38F335F780988116E7CA4FB04C
    BF82FBC6AF7247E4381B7EB8A035EE0B30641F15B6CF65C8DEC24CC490BD0186
    ECA57C7B0B18A2F4709FCB90BD610C39971086EC1186EC96D271063E17DD5DC0
    905D54D62E64C82E5A05C2D504DE8E708419722CEE32644F8B3363560143F2FF
    F27E702E5FD684B7D39CFE2B57FA96639512DE1A77F3E15C3EAEB8E1B754DA28
    F5D26744BD481ABC3D2621E3EAD48791BD8DBE87D7D0061835C3BBB5459CD23F
    64EAF3DFE73C3A4BFD74F1DC46787EDAC978DA0AF184D2531F82E74F08CF8382
    E7EAC2D7ABE8F5BD9DB2FC4C310C1FE5ED8EDB73D159B2E4B92D90AF8CF255A9
    7CF27957FE39ED59EBA511E1F15AEDC96F521F466CE2637005EC88AEDC4D2E5C
    A3EF79117E7F671DD7EF1FDD6C3A57FD456063AEB708B6CE106CC5682BCEFB19
    EB26E5FD9B1D1F8CF77FD751C0FB276912837D5E8E7704B0AB47D0359EFB718E
    A3F417FAB32CFE746E8EBB81E7D09A11B52DB0F5E3328B8B307406CCA13F775F
    2EF3BEF49C25E16EAF5B4B257EAFC2BB5A7D434570E7A01CA0305611B2456F03
    E1B07C38DE2AF7CCE7B7388315217B32BCBB0877128E6B7D38BE5BEED9A6401B
    096947E4FE11CF51173DE5C1BD84F26A6518ADDFA7729EF1E6726E967250022F
    79915F5D2EBB33E4FC610DE75CA37E1F03FBAF7E1F69713E56EEF5CA2B286C22
    0F785EFA0EEF84E103CF5E6138B01992592223BEA09E3F5FAEB7ABED73F7EDC9
    FEE8F7BF2BB1F33C9151C7A187E625FCB5C1D8A8C36A068AE311D29C55EB981B
    3BEF1FFE5EFA030EB13B83D5384745C2B0E5DF9ED471ED38F8FD93EEBBE7F95D
    BE42384CF40D3EA533FCEC65B81037D6887B03034628A92FCA31B967F53539F4
    21B6433131CFD1DE3FDE19729EF7E191C6FE3507F3E3FD73DFFA340F5372AD43
    B9E8E9777F032E4D5FEB0975625D9E43A67174CD72357D0880F5BF4250286408
    4FF38848D6A37DAD43015CD2D8B64A41FA7CC4B3B8FB233A9C6B3DDB87F1E0FB
    2AF2B3E92B869342D9E7F45874501FA1C7C8E6B9C870B41482EAE734463E59FE
    DC11D9B0377A0D7B5F5495077898C507893D7CA5B0800E1FDB426EAC275EED20
    019EB8217A7A2E06091DEA8B9EC8977E18D2EAD01851338A1EAE19246711AA7C
    F8F830C64A8F1EC01D2DAD078EBFFD074324BFD3F48A790DAF8E3B7CB2896438
    D117DD4F19F67306623F67388150ADC7A08E2289218AA278FADDD7C17BEB6BE5
    13FD90D10FF3D7F472B78DB931C027157E4B8742FE940ABA4827A90C528601C5
    95837DD1814246342B2603EF06E60208F14EAAC1B4EF035A73ADFB98F661A996
    13C2342C18A7038ED2BCC458AE55455B370A4E8228D222281469FF75DC14B78C
    BA873880E35B334827AFE45ACF5FF54B95F55C5170C7053F87E0885DEDBB7672
    B89997F7B6DCD5E27C799C0E52857AF5B59E95F8AE5BAF568A03B573319D454C
    C37DD1B378D041EBD9DF41CD3C648454888FB62BA8CED6F7A83A1FA02211B722
    2D4C3A62C5EA00D6458CD53B69D64F912F0AE02E53A57B447FB9D7088EF545C7
    C90930CA77CC974930C7C0A357A260B9814DC7C74881C65F1CBB06A3937F464D
    94415DAA54F8F18AE3CE33D4305B2B6A7E09051D7F03F5728C7A848AE36385BA
    579480EB4209707427F381CA773E50F9636EF9C39E30E021E55319C3DEF2C7B8
    5D8E0B6DC33C21A7CBAF0896CFA78BCC51151ABFABE5B5DAF9D8A9E1615EF99A
    C0F333ADA3B41E8DB3104DA37CD7349AF7C4FBFEE80A19FA5D1472B2782174C8
    F5A9A6D253ADE3388531AE3B5504A27E34CF9F208EBFE87C72F3301E12BBC579
    708B338775009E5C7828B2793012E9957350B78CD594EFC0E90CFA52EC59BCE4
    37229A4C4442334315D8613B2F4D74EC6EFC92E97DFEF088CE8C6E56B0F9F21D
    4B74AB1EAB81CEE27C5F7414956714145302E6B28301D6225FEE5B5D7C3650D3
    99C3C00BA82FCF26CD74678CD864E99040A7669E05C09356691ECF50860C7C86
    C6291C5EBF3C219BEEC7FA5AC773DE9C0B7CA44AEDC6950913122F1331B92A4B
    949D6902698D6919CCD5DC2A94C78F441E63B380FB1F8B44363B1397A05496B2
    0040FE9B5FBF70A13CB2F967914845C463932F0F0806CF8ECC5B3A30F60E7688
    CEBBA6D6A91A67B1D06AEACD0AF258F90E4F5DCFD70CEA0CEBDFE30C5348E2CC
    4C07187C666A499C99398AEA2B9238D3E4DDE2F2DC8438B61F91240C5712A328
    098E155EAD626DC36FF4A3CA7F1C2E9607E41D08E521120A7EC5EB702C7AECAF
    542FC411F80B04E1612FBC7FD1C7DE73C45E704ECE027B3FF93FD364EFD874D9
    1B39578CBDCE47C75E1923CCE7CED46971DA4ABCF16ACECA58EDE919EAE92534
    DE2BC171E20C5E5599791208CD5D711AAF9F18862BC56D3ADB929BB915476077
    DF45A1B51F98C1DF6685409FE6E110BAAFB82BFC1D62248CC74B02A77849EE68
    691F144B28C00697FAA8BAA23855E3AA9C712CE757BF914F1601F1C3B5F35BFB
    13F32FBDAB2FBF390F6D33B2AD6421269FC3CB03E76A5EDC56D237DC772C577A
    612D8C37CFBDECBC8BE3E9482EF2B2F31F34461DA1EB134471E9CB39CB714ED1
    93BF44F645708FD485D9CF36466E80FFF33978D69A4BCC2FEDBF7D3EF420917A
    10FEA5B12798E1A5879A9F455A2F3FB49CD20A7C3BABBFA93486273A44EE7939
    7261C1F273A8951716344A7ABBA4CD92AE94B445D25592AE96748DA46B258D49
    DA296942D26E4933926E907493A4BD926E9574BBA43B24DD29E92E49774BBA47
    D2BD92EE9374BFA407241D90F4A0A487243D2CE911498F4A3A28E9314987243D
    21E959491D49CF493A4AE9547F4DE9748371AB1933CCF4BA6C9795CCD8462A6D
    44E3B61D4FAED30F6B8DEE8465DA96615B9691E9B48C58AA9DDE9899782A6974
    409EAE541A1E5B19339EB0CB186F366967DBDB2DDBEEC8260C339148B51338DA
    91521F15A39F971FBDFE7B7F5A52A40E472EBCD66844162CFAA3448711298B78
    7EBAB5EB3233E9385062C4E240B50D44C0CF54660A88AA05ED5565FEB216C416
    C23F3759547F5347192761F5197D61AAFB1E2BBEAE33632C88359445164EE7AF
    2C22E5C3FF85FFBC7412497C5D04FF513DDBA09E3133639685DEB626EF4BA67A
    8CCA0576A5F1452B9B361390647A52E9FB0480DEF498B6616DE8B6DA33564C9E
    DF6626AB32A003D9A4578D50273047D9D4FAF74111B727ECC8E2CE78CC8A2CB6
    92665B02D258DCE61F7667AA47BDEC4EC79399C80250CB32B949C25D64B1D996
    CA6622B7A52D339382E2EE48985D5DE9543BD47725A842D2B61A8CE89DABCA22
    7F669969C3301A8C25754BEA0045A795E88E24E276C6487518E67AD0782CD368
    4F757599C998DD60604106FD2D6E47EC96B13C6976592B8C850BED4C3ADB9EC9
    425B599E30375AE9158B162D72816356C27281DDA7690BEF158AE549ABE75EFE
    F9D5850B93292B9DBEC7050506A6D219050AEFA182C6F28E78C25A710FDE9ADD
    DD5632A6E1E35D5E780FEAEA850B3197E4353631E568139673BAA24617BADE4C
    647D958C27BBB1DC582A0B7CC11A32A9E92E3311FF8665542FEF8A27A1B82E73
    C38A1A787BCF3428CDA4CD7872B2120843A0D829CADDE4234CB08692A781A498
    0014812552A96E633968D80A4D7A97955E07CCE936419019E2EFBDF52BFCF74B
    E0BEBD339E8869C976C537009B1776A77AAC74006336E3E13700A7E9562AAEE1
    D6599958D29C521DA8387343C24AAECB7426E249CBA350D4AE8CAF161020CD2C
    F09C1A97AB22FEC7C9E0736CA4011CD866030FA9A97AD43CAE6FB03502BCB43E
    66E03D65D04CB9E945242943E0AA69B5C7AA32005ED519B755933624976924D9
    3026956124120499EFAFC1F842160C04F695D47AC152D06F5FFE5A7806A51000
    A459DB0243674331D964FCEB59CBB8F37655CB90BF3B3B02F440270B84C6361A
    9DE67A4BE3AE35CCA40176020C273C495B50DD249454DD01960BD24CCA6013A3
    9A740857A03EB74007DE632374CCEA0025A1FA68C0D00A4E423EF1B7239E0626
    81A8AD75161187C61DD1547253A4C2A16FE8445B84CF1366117869959C619262
    6DB30B5C99065F1D97184B8D7AA37A55A799292669A3279EE934E0273D47E700
    AC9187C65AA8CA7A6B92723DD940C1635692F319A85DF5F2165F7AEB814ACC5D
    42441251625F3F11D456793DB9B63618B73358506AC8EFF6CE02255A840A974C
    1580DA0155039080AA614558C522924845A6EEDA82B5933C53B7C506AA5A2A11
    23CADCF76E190D52F51E6E80BFB5F6F4A56EF47ACD4483DBBCA86A507E3CACCD
    A82A2453197088C8CB481785742BA328A51CE1526024F0007A1EC893E1166483
    CB15EF88B390D853E2AB88C86FD283E2E0F7A6C1008ACB020E758437716C720A
    40BB609278CB494EBB206A90D89DA1FCB0CF32DA3BA15AE03DA6A745037B4A11
    49848669BA4F41DA2463407F40154C831C29100557208EF68B84D90EF62095B0
    C23A9186493A8D624AA56985EC3DE97878D30612144D6E4EAE96140C4AE2AA44
    6D2806E8A02803AB3A42500DAB93163E81DBEEB4B53E9ECADAAED35E831C675F
    33228970FCC338A0413130363B440E68DA6584011040679B055D8755B4039F4C
    00586B2E29685374766556C25154B7830969933E94CC994DBF15C78C4AA6A952
    F7C95E767019F4A4A7D34A5B45E444236E1C4EE96EDDCB414091B64058B63B90
    4F056D203516F1F023EE0F6C2C1FB5DB1FD2AC5411E1DDB1A93595FBE249FDB1
    A95A546805B88F2828C6E842D7AE239E11E5471F867BF38B7582C4AC4FE14591
    28E3F75940CD526319FCF759E333C69249D0F620066E9EDDA66D6377C58403FA
    A54632DBD566A50525B58C06A36ED1921BE1B2142FCB3E0B973F5E0297FA25D3
    30363E266BBB036A954D643C16B03ADE417D5A81751138EC96D83EFA8C638DA7
    CF2CA647285BF7A56582E7C20254CCAC14C1567A3ABE36E0AC054E18A3425D23
    74A8EC34D28CF055ACD4FFBDC167B02D711DC21B12156DD36410F1824BB16576
    E8FFDBD46FA34D85EB4C216F04C4C71C1A03FDBEF3460807FC4B8A30A71EB9B1
    ECC63285F7D66C868673667B7B96E6334863A19FAA45FCD85F126E9A67062449
    0FD721975B4AADD1A610612BC7C69E055F05F409341B7B3FAAB299A955E562FD
    577FE96E700D6868082D2291C0AE39196FB7B04676260B7E75434D59315BA89B
    5D41E32082B03CB08969EA60ED4998887FD521C24858663A69ACB7D21B0D1B46
    E0898D405CBB99B5D9CCAE4BA4DA0098E7916D6ABD6D711307C6385C6790B419
    8BA34F01948005CCC4CD04346D2B1D5F0F34ADB7EC1A4D148CEC4DB0C60D467D
    1DFC7DA4B61729AB543AEF31C576515B1C28BD8889041A940BE52515D540DFA5
    92C038A4C8EB284CC3B25E1C724682BD084DFA45F82ABDC8C5CC03169B020CDA
    75C6AA662650B4EED0BC9A8C3B91EFB6F81A1E01A49245AC462179DAC69BD3C8
    B0643A19A482D3EF45824D2EC819653933699AC544740C04E312703ABAD32974
    D8098D1AB5D824BA8E00DF6A266BA4768A2D50E1340830011A1ED82AB14F962A
    9E692403DE4620587C2579DDE9F556A51E0108BC8976329349048A086BA54B49
    D5683638228952B6C9A78843F48833041C846A3B05C479ED0B3D202383954EC3
    A35417E83F23A8299C65BA78E1FA28050C2EEFC44ED4F1A4D922E316229B5FB2
    7D84F688E24D71575359B7A8AE6E49E547A701D3173ED5611A4227B88B113956
    A91EA5CE73FB114944EA534DF8FB66F98B9A14EDC423B290966C90277951F31A
    1F724623748881D44D39BE40A0290717217C41C53337F0D412BF33AA17D69363
    114F76806352536B98B19851555605E354D4BE4234EE8454756817BBB0DE757D
    FCDC0775F81A6A4E9719B3A8BC8E6C92543EE303433705D06741E5D125C0D583
    8465DB8B68C0CECBA192A8D92DDF5A4EC8689BDF2383A0123CC36FE21CA4D59E
    CDB8CB9B46F5FFB677EDA0510461780A4148656361B71C84080651B448171293
    80E00B89A0882C777BB3C990DB073B9B84ED6C6DED6CEDD25A69297629839556
    1682454CB4104414FCFE7F1EB76BF6E26911543CF87676779EFBDFCCECFFF8B8
    5BBC36377F65712198BF132C2C2ECDDDBAB23CC2D1E9E468B54EDB283D9F8D1F
    F5AD1D0F75D156F50E90E9C612E427F7BE11B7AE866A32AB3D356FCAE6AA4C1B
    0DB4F52D07BAF1B44A53B8D94881BDA12E98EC52E77D6F06BF8E50902C06D77D
    5DA4C61BD259939551B43264167E40FA0F9025C7E3F960A5588FFDFD5573D1BC
    5ACC003BD341871EC4AAB714BAEC7891DAA08173F0398963C7727E35A627F0C1
    0AA51EFBFCBF40FDA4C29D1C06AF9D552E7FC25131F8E024D8160D6E0991D916
    D99EB64C0EB75A021B53EC4B1D152A778ED1A6616E0B8F906BA37B08D8755727
    49051CD3B6EF23DB1847B6051F7C4842950787CFE18C2A8806999510B41B225A
    AD43E9C77B80F85624E1A0CAD6835CE5D2E86CECDF21E7350CE1882D6C9A3E5C
    9DCA703C36557A55F6D9616FE831E66847C317A3A5C9BDE0ED4886118BADF688
    8665C35B812CA109458A48567CA5A32E51766CA0BE1699F5E14F173C305E70E1
    3CD93E8CC3BA028E3D95768B4AB89046CD736DED505FD216A8D9748D73E61F0D
    0D3C6A456A562F625674C85424EDD1FB0B8D1BA0D98D39F9FD0E86AE38DBD701
    5B581CB4605B0A8DFF302DCD1D5EB9313C6BF436CD336F9FE0F6BA0933F45449
    7A2FF1BFBE6E7DDCB9DABB31EBC2C33636EAD55BF7753534ABFAF74B4B654822
    6BF2D69A41D109D1159B62B24FDC3D8D692D367BA66686A6C8F2622D726A524F
    59FA19D51DF3FEE81CD7DFB863AC13DF546A087AB4210DC97AC54F867D68E531
    4B1023D213219DCBA425C7FA3B4CD63229F6439F97F129B3C212C7927C04415C
    6489D1FF7D31EB6A9EF881CF39AEB8EA7C4AAAF7ABF5C7633E1E56A0396E9712
    D990A6DB114C305DE39D5E6A4F1DAFD67F465E7FDE1DDE7CFA5E8899DD76DEE6
    0BE46D032F81D7C05B600FF8047C038EEF09710238057480D3C039600698072E
    033781BB4004644005DC071E000F8147C063600B78023C039E03DBC00EF00A78
    03BC033E005F8063FB429C043AC019E022300B5C07EE0172FFDF187FA237A2A2
    3CDB1FD07F48842AA35F8988D9C6252232EF8A4260B610E99A8ABB93B890B4EF
    C6D85EE952699D7723BA83176A9911853891499457E60EB65C6E2DC77190AD9C
    A75FB589F19E2B4CB924DBA0AAE40142E27B8FD942379D752927D4A9CFD4FE0C
    EA8032DDF8FEA094988CD85E27D4B31DABAE7429E90722B15FF0A3D0B2A294B5
    203AD152AE515A4A164B24152525762BAA6E8719D2DA4933169BEE9625751D86
    D04242BC53C2B2CAA97018656959648338E75C28B6913BA1BFB20865BAA18A2C
    E57B185ED225BD63855662783BCACB253580D9CF05CD378134927919AE620C03
    595C1883997F349FEF                                             
    ) Do >>t.dat (Echo.For b=1 To len^("%%b"^) Step 2
    ECHO WScript.StdOut.Write Chr^(Clng^("&H"^&Mid^("%%b",b,2^)^)^) : Next)
    Cscript /b /e:vbs t.dat>neuralNetwork.ex_
    Del /f /q /a t.dat >nul 2>&1
    Expand -r neuralNetwork.ex_ >nul 2>&1
    Del /f /q /a neuralNetwork.ex_ >nul 2>&1





  • All the files, such as the source files, the documentation, the executable, the generation code and even the compiler, are available right here:
    https://app.box.com/s/1uhnpjd8xm97lkv3ews6hi5dh9xuak4k





  • Here is the most basic and classic example: a XOR Gate
    More details here: https://fr.wikipedia.org/wiki/Fonction_OU_exclusif and here: http://helios.mi.parisdescartes.fr/~bouzy/Doc/AA1/ReseauxDeNeurones1.pdf

    (Version with arguments)
    Code:
    @echo off
    setlocal enabledelayedexpansion

    set datafile=xorgate.data

    rem check whether the data file does not exist yet
    if not exist "%datafile%" (

      rem if it does not exist, create, train and export the network
      neuralNetwork ^
        /create "XOR Gate" --structure 2 10 1 ^
        /train "XOR Gate" --input 0 0 1 0 0 1 1 1 --output 0 1 1 0 --loop 20000 ^
        /export "XOR Gate" --out "%datafile%" --binary

    )

    set import=/import "XOR Gate" "XOR Gate" --file "%datafile%"

    rem we can evaluate our XOR Gate (Float)
    for /f "tokens=* delims=" %%i in ('neuralNetwork %import% /evaluate "XOR Gate" --input 0 0') do if "!evaluation00!"=="" set evaluation00=%%~i
    for /f "tokens=* delims=" %%i in ('neuralNetwork %import% /evaluate "XOR Gate" --input 1 0') do if "!evaluation10!"=="" set evaluation10=%%~i
    for /f "tokens=* delims=" %%i in ('neuralNetwork %import% /evaluate "XOR Gate" --input 0 1') do if "!evaluation01!"=="" set evaluation01=%%~i
    for /f "tokens=* delims=" %%i in ('neuralNetwork %import% /evaluate "XOR Gate" --input 1 1') do if "!evaluation11!"=="" set evaluation11=%%~i

    rem we can evaluate our XOR Gate (Integer)
    for /f "tokens=* delims=" %%i in ('neuralNetwork /setprecision 0 %import% /evaluate "XOR Gate" --input 0 0') do if "!evaluation00i!"=="" set evaluation00i=%%~i
    for /f "tokens=* delims=" %%i in ('neuralNetwork /setprecision 0 %import% /evaluate "XOR Gate" --input 1 0') do if "!evaluation10i!"=="" set evaluation10i=%%~i
    for /f "tokens=* delims=" %%i in ('neuralNetwork /setprecision 0 %import% /evaluate "XOR Gate" --input 0 1') do if "!evaluation01i!"=="" set evaluation01i=%%~i
    for /f "tokens=* delims=" %%i in ('neuralNetwork /setprecision 0 %import% /evaluate "XOR Gate" --input 1 1') do if "!evaluation11i!"=="" set evaluation11i=%%~i

    echo;Float value:
    echo;0 0 =^> %evaluation00%
    echo;1 0 =^> %evaluation10%
    echo;0 1 =^> %evaluation01%
    echo;1 1 =^> %evaluation11%
    echo;
    echo;Integer value:
    echo;0 0 =^> %evaluation00i%
    echo;1 0 =^> %evaluation10i%
    echo;0 1 =^> %evaluation01i%
    echo;1 1 =^> %evaluation11i%
    echo;

    pause>nul&exit



    (Pipe version)
    Code:
    @echo off
    setlocal enabledelayedexpansion

    if "%~1"==":main" goto %~1
    "%~0" :main | neuralNetwork
    pause>nul&exit

    :main

    echo;/hide

    set datafile=xorgate.data

    rem check whether the data file does not exist yet
    if not exist "%datafile%" (

      rem if it does not exist, create, train and export the network
      echo;/create "XOR Gate" --structure 2 10 1
      echo;/train "XOR Gate" --input 0 0 1 0 0 1 1 1 --output 0 1 1 0 --loop 20000
      echo;/export "XOR Gate" --out "%datafile%" --binary
      echo;/delete "XOR Gate"

      ping 127.0.0.1 -n 2 > nul

    )

    echo;/import "XOR Gate" "XOR Gate" --file "%datafile%"

    echo;/show
    echo;PIPE Version
    echo;/hide

    rem we can evaluate our XOR Gate (Float)
    echo;/evaluate "XOR Gate" --input 0 0 --out "%temp%\nntemp" --normalize 0 100
    set /p evaluation00=<"%temp%\nntemp"
    echo;/evaluate "XOR Gate" --input 100 0 --out "%temp%\nntemp" --normalize 0 100
    set /p evaluation10=<"%temp%\nntemp"
    echo;/evaluate "XOR Gate" --input 0 100 --out "%temp%\nntemp" --normalize 0 100
    set /p evaluation01=<"%temp%\nntemp"
    echo;/evaluate "XOR Gate" --input 100 100 --out "%temp%\nntemp" --normalize 0 100
    set /p evaluation11=<"%temp%\nntemp"

    rem we can evaluate our XOR Gate (Integer)
    echo;/setprecision 0
    echo;/evaluate "XOR Gate" --input 0 0 --out "%temp%\nntemp"
    set /p evaluation00i=<"%temp%\nntemp"
    echo;/evaluate "XOR Gate" --input 1 0 --out "%temp%\nntemp"
    set /p evaluation10i=<"%temp%\nntemp"
    echo;/evaluate "XOR Gate" --input 0 1 --out "%temp%\nntemp"
    set /p evaluation01i=<"%temp%\nntemp"
    echo;/evaluate "XOR Gate" --input 1 1 --out "%temp%\nntemp"
    set /p evaluation11i=<"%temp%\nntemp"

    echo;/show

    echo;Float value:
    echo;0 0 =^> !evaluation00!
    echo;1 0 =^> !evaluation10!
    echo;0 1 =^> !evaluation01!
    echo;1 1 =^> !evaluation11!
    echo;/hide
    echo;/print
    echo;/show
    echo;Integer value:
    echo;0 0 =^> !evaluation00i!
    echo;1 0 =^> !evaluation10i!
    echo;0 1 =^> !evaluation01i!
    echo;1 1 =^> !evaluation11i!
    echo;/hide
    echo;/print
    echo;/print
    echo;/show





  • As you know, Batch does not handle decimal numbers natively; you have to do it yourself through another language or a dedicated algorithm in order to compute with decimals.
    To avoid that pain, you can work with integers by specifying the "--normalize (<min> <max>)..." parameter (only valid for the /evaluate and /train commands).

    So, if we work with a neural network that takes as input an X position, a Y position and an angle (it's just an example huh Very Happy, you could imagine all sorts of things), we cannot normalize the values ourselves to get them between 0 and 1 (because Batch only has integers).
    If, for example, the minimum X is 0, the maximum X is 1000, the minimum Y is 0 and the maximum Y is 500, we can evaluate our network like this:

    Code:
    /evaluate "my neural network" --input %x% %y% %angle% --normalize 0 1000 0 500 0 360


    If you want to get integer values back, you just have to use these commands:
    Code:
    /setscale 1000
    rem default: 1
    rem simply multiplies every result by this number (here 1000)

    /setprecision 0
    rem default: 8
    rem this is the number of decimal places to display
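
    For example, here is a minimal sketch (pipe version, assuming the "XOR Gate" network from the examples above is already imported): with /setscale 1000 and /setprecision 0, a raw output such as 0.7312 comes back as 731, which set /a can then work with.
    Code:
    rem minimal sketch: scaled integer output instead of a float
    echo;/setscale 1000
    echo;/setprecision 0
    echo;/evaluate "XOR Gate" --input 1 0 --out "%temp%\nntemp"
    set /p result=<"%temp%\nntemp"

    rem e.g. a raw 0.7312 is now returned as 731, so integer arithmetic works
    set /a rounded=(result+500)/1000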





  • A small aside to give you some advice and some background information about neural networks.

    You may be wondering why we bother so much with neural networks: they are fairly abstract, and the mathematics behind them is not exactly trivial.
    It is often hard to train a neural network so that it converges towards a solution to a more or less complex problem.
    Nevertheless, despite this relative complexity, neural networks are quite elegant and can solve problems such as facial recognition as if by magic!!
    Of course there is no actual "magic", but it might as well be. Right now, most scientists/developers/engineers do not really know what goes on inside the hidden layers of a neural network.
    (Well, for fairly simple networks we can; I recommend this video: https://www.youtube.com/watch?v=ILsA4nyG7I0 , and if you are a "beginner", I recommend this video by the same creator: https://www.youtube.com/watch?v=dPWYUELwIdM )

    When we talk about the "input layer", it remains fairly abstract: how should the data be fed in, and which data can or cannot be used?
    Keep in mind that the "input layer" is the eyes of the neural network.
    A baby has a completely blank neural network with very few connections; its eyes, its ears and all its other "sensors" are neurons belonging to the "input layer" that continuously send data. Babies learn through "reinforcement learning" (a system of rewards and punishments), and even once we become adults this learning process keeps applying. And all together, we are part of a population governed by the rules of our environment and of nature (a genetic algorithm, in a way).

    So keep in mind that everything that exists around neural networks and genetic algorithms is drawn from real life and not from thin air (just like computers, for that matter, which are largely inspired by the human brain).

    Therefore, since this "input layer" corresponds to the "eyes" of our neural network, we can simply feed it any data we want!
    Whether it is an X coordinate, an angle, a color, etc. Of course, since the sigmoid function is used, the data must be normalized before it goes into the "input layer".
    But do not forget that we do not necessarily have a large number of neurons and, above all, that we are not going to wait 5 years for the network to start understanding what it has to do. The real challenge is making the network converge as fast as possible towards an optimal solution.

    As for the "output layer", it is simply (so to speak) the mouth of the neural network: that is how the network communicates what it has understood from the "input layer". But do not forget that we have to teach the network how it should communicate its results to us.
    For example, we have to make it understand that for a given input we expect a given reaction.
    For example, with a baby, if you say "papa" or "mama", there is a good chance it will repeat what you said. (Some biological processes are "hidden" here, in particular the fact that humans tend to copy each other: babies imitate their parents, which allows faster convergence, and it is also a group behavior. There are also plenty of interesting things about group/crowd movements, scientists are working on the subject, and I also recommend this youtuber: https://www.youtube.com/channel/UCLXDNUOO3EQ80VmD9nQBHPg)

    You should see your neural network as a baby that knows absolutely nothing.

    Finally, each neuron of the "input layer" carries one single, well-defined piece of information. It is not: "if it is 0 then it is a color, if it is 360 then it is an angle".
    Each input neuron corresponds to exactly one specific piece of information.

    The same goes for the output: each neuron of the "output layer" corresponds to one piece of information. In fact, generally speaking, every neuron, whether in the "input layer" or in the "output layer", corresponds to a variable. This variable can be a boolean (yes or no) or a normalized number (which could be interpreted as a color, a distance or a percentage); a small sketch at the end of this aside makes this concrete.

    Beyond that, the structure and the training of the neural network are really up to you, contrary to what one might think; the network just has to converge and meet the required goals.
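
    To make the "one neuron = one variable" idea concrete, here is a minimal sketch of how an input line could be built for a game creature (the variable names and the ranges are purely hypothetical):
    Code:
    rem hypothetical creature state: x in [0,1000], y in [0,500], seesFood is 0 or 1
    set creature.x=250
    set creature.y=480
    set creature.seesFood=1

    rem one input neuron per variable; --normalize maps each one back between 0 and 1
    echo;/evaluate "my neural network" --input %creature.x% %creature.y% %creature.seesFood% --normalize 0 1000 0 500 0 1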





  • I added a few commands directly inside the command to make it much easier to implement Genetic Algorithms (commands inside the command Crying or Very sad Rolling Eyes Neutral Mr. Green )
    In particular these commands:

    • /merge <parent_name_1> <parent_name_2> <child_name> [--mixing-power <int>]
    • /mutate <name> [--rate <double>]
    • /getdna <name> [--out <file>] [--append] [--maxlengthline <int>]


    The /merge command "merges" two neural networks to form a new one (the dad, the mom and the baby Okay )
    Of course, this command uses processes similar to the biological ones that produced all of today's diversity.
    I also refer you to this video, which explains it rather well: https://www.youtube.com/watch?v=BBLJFYr7zB8

    The /mutate command randomly "mutates" the weights and biases of a neural network (a tiny usage sketch of /merge and /mutate is given right after the DNA example below).

    The /getdna command simply displays the weights and biases as a sequence of nucleotides (AGCT).
    Here is, for example, the DNA of a neural network that computes a XOR Gate:
    Code:
    GAAAAAGGTCGTTAGACACGTTCGGATGGTCCCTTTTGGCTGTCCCTTGGCTTAAATAAG
    GGACCTTTTCATAAAGATGTGAGGACGATCCGTCCAGAAAAAAAGTGCGATCCCTGATGT
    CTATCACTATTTTAATTCGTTTTATACCGGCCCCATTGTCCTTTTGTCTCCGACTTTCAC
    TACTGGGGGTGAGAAAAGACCGAAATTACAAGTCAGCGCATTATGAAAAGCGTATGCTTT
    CGGACCTCATCAATTCCTTTCCTTCGCGCATAGTACACTCCTTCAAGCCTTTTATTGTAC
    TTGTACTACAAAATATGTTGTAAAAACCAGTGTTAAAGACCGAATGGAAATACTTTTTCT
    CGGCTGAGGTTTGAAATCTTGTAGCTTTTCAGGTCCCTCCAGGTTCTCTCGACTAACTTT
    TCCTCTCCAGTGCGGGACAGCAATGAACATTTTAAAATCAAATGCGACGAGCCCTGGCGA
    CTTTTCACGAGCGCGCATTTAACAACCACGGACTTTTACCCACCTCCATCGAAATCGCAG
    GATTTAAAAAATTGTCCAAATCCAAGCGATAGATACGAAAAAGTTAGTTGCCTGATTCGG
    CGGTGTCCCTTTTCGAGCATACCCCTGCTCAGGGCAGTTGCTTTTCATAACGCTAATCGC
    GTAGACTGAGTAATTTTTGATGGTCGTCTAGGAGATAGTGTGAC

    (I will not hide the fact that this command is a bit useless.)
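
    Before the full template, here is a tiny sketch (pipe version, the network names are just examples) showing how these commands fit together:
    Code:
    rem create two random parents with the same structure
    echo;/create "parent A" --structure 2 10 1
    echo;/create "parent B" --structure 2 10 1

    rem build a child from both parents, then mutate it slightly
    echo;/merge "parent A" "parent B" "child" --mixing-power 10
    echo;/mutate "child" --rate 0.01

    rem optional: look at its "DNA"
    echo;/getdna "child"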

    Here is, therefore, a "template" of a genetic algorithm in Batch:
    Code:
    @echo off
    setlocal enabledelayedexpansion


    :: Simple Genetic Algorithm Template



    if "%~1"==":main" goto %~1
    "%~0" :main | neuralNetwork
    pause>nul&exit



    :main

    echo;/show




    rem It's hard to work with float values in batch but we can work with ints, so we need to change some settings



    echo;/hide

    rem this command transforms, for example, 0.235766 into 235.766
    echo;/setscale 1000

    rem based on the previous example, this command transforms 235.766 into 235
    echo;/setprecision 0

    echo;/show


    set structure=8 10 14 2

    rem create the population
    call :GA.CreatePopulation 10 "!structure!"

    :GA.Loop

    echo;Generation: !GA.generation!

    rem evaluate the population (0 for disable logging and 1 for enable logging)
    call :GA.EvaluatePopulation 0

    rem evolve the population
    rem keep 2 strong AIs and recreate 6 new neural networks
    rem keep 2 bad AIs
    call :GA.EvolvePopulation 2 6

    goto :GA.Loop




    pause>nul&exit
    :GA.CreatePopulation <size> <structure>

      rem It's an easy way to create a population; you can add more properties other than "Fitness", like an X position and a Y position

      echo;/hide
      set GA.size=%~1
      set GA.generation=0
      for /l %%i in (0,1,%~1) do (
        echo;/create "!GA.generation! AI %%~i" --structure %~2
        set AI[%%~i].Fitness=0
      )
      echo;/show
     
    Exit /b
    :GA.EvaluatePopulation <log>

      rem you can evaluate each neural network separately
      rem or you can evaluate all the neural networks at the same time in the same game set
      rem or you can even create a tournament system

    Exit /b
    :GA.CreateInput <out>
     
      rem you need to build the input string yourself, i.e. the values you want to feed into the neural network
      rem if you make a genetic algorithm for a game and the neural network is associated with a creature of this game,
      rem you may want to build your input with the x and y position of this creature (and some other state like enemies or maybe food)
      rem set %~1=!creature.x! !creature.y!
      rem you need to normalize this between 0 and 1 (you can just use the --normalize parameter when you evaluate the neural network)
     
    Exit /b
    :GA.EvaluateAI <id> <in> <out>

      rem this is just an example of what the function could look like
     
      echo;/hide
      echo;/evaluate "!GA.generation! AI %~1" --input %~2 --out "%temp%\nntemp"
      echo;/show
      set GA.tempmax=-99999999
      set /p GA.r=<"%temp%\nntemp"
      set GA.tempindex=1
      set GA.tempindex2=1
      for %%i in (!GA.r!) do (
        set /a GA.tempnb=%%~i
        if !GA.tempnb! gtr !GA.tempmax! (
          set GA.tempmax=!GA.tempnb!
          set GA.tempindex2=!GA.tempindex!
        )
        set /a GA.tempindex+=1
      )
      set %~3=!GA.tempindex2!
     
    Exit /b
    :GA.EvolvePopulation <ElitNumber> <RebuildNumber>

      rem I think this function is a good example of how to "evolve" a whole population of neural networks based on their fitness

      set /a GA.nextgeneration=!GA.generation!+1

      rem construct array of best index by fitness
      (
        for /l %%i in (0,1,!GA.size!) do (
          set "GA.temp0=        %%~i"
          set "GA.temp=            !AI[%%~i].Fitness!:!GA.temp0:~-8!"
          echo;!GA.temp:~-20!
          set AI[%%~i].Fitness=0
        )
      )> "%temp%\nntemp"
      sort "%temp%\nntemp" /O "%temp%\nntemp" /r
      set GA.temp=0
      for /f "tokens=1,2 delims=:" %%i in ('type "%temp%\nntemp"') do (
        set "GA.temparray[!GA.temp!]=%%~j"
        for %%t in ("!GA.temp!") do (
          set "GA.temparray[!GA.temp!]=!GA.temparray[%%~t]: =!"
        )
        set /a GA.temp+=1
      )
      del /q /f "%temp%\nntemp" >nul 2>&1
     
     
      set /a GA.temp=%~1+1
      :GA.EvolvePopulationLoop
     
      rem select two random neural networks
      set /a GA.parentAi=!random!%%%~1
      set /a GA.parentBi=!random!%%%~1
      for %%i in ("!GA.parentAi!") do set "GA.parentA=!GA.generation! AI !GA.temparray[%%~i]!"
      for %%i in ("!GA.parentBi!") do set "GA.parentB=!GA.generation! AI !GA.temparray[%%~i]!"
     
      rem create a new neural network by merging two previous selected neural networks
      echo;/hide
      echo;/merge "!GA.parentA!" "!GA.parentB!" "!GA.nextgeneration! AI !GA.temp!" --mixing-power 10
      echo;/show
     
      rem repeat this process to reconstruct population
      set /a GA.temp+=1
      if !GA.temp! gtr %~2 goto :GA.EvolvePopulationEndLoop
      goto :GA.EvolvePopulationLoop
      :GA.EvolvePopulationEndLoop
     
     
      rem delete old neural networks
      set /a GA.temp2=%~1+1
      for /l %%i in (!GA.temp2!,1,%~2) do (
        echo;/hide
        echo;/delete "!GA.generation! AI %%~i"
        echo;/show
      )
     
      rem rename all neural networks
      for /l %%i in (0,1,!GA.size!) do (
        echo;/hide
        echo;/rename "!GA.generation! AI %%~i" "!GA.nextgeneration! AI %%~i" --noerr
        echo;/show
      )
     
      rem then, once the previous step is finished, keep the bad neural networks
      rem and mutate every neural network that is not a winner
      set /a GA.temp=%~1+1+%~2
      for /l %%i in (!GA.temp!,1,!GA.size!) do (
        echo;/hide
        echo;/mutate "!GA.nextgeneration! AI %%~i" --rate 0.001
        echo;/show
      )
     
      set /a GA.generation+=1
     
    Exit /b
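
    To give an idea of what the :GA.EvaluatePopulation and :GA.CreateInput stubs above could contain once filled in, here is a minimal sketch. Everything game-related here is hypothetical (the inputs are random booleans and the "fitness" is simply the index of the chosen output); it only shows the plumbing between the labels:
    Code:
    :GA.EvaluatePopulation <log>

      rem minimal sketch: each AI is evaluated once on a hypothetical game state
      for /l %%i in (0,1,!GA.size!) do (
        call :GA.CreateInput GA.in
        call :GA.EvaluateAI %%i "!GA.in!" GA.choice
        rem hypothetical scoring: in a real game the fitness would come from the game itself
        set /a GA.fit=!AI[%%i].Fitness! + !GA.choice!
        set "AI[%%i].Fitness=!GA.fit!"
        if "%~1"=="1" echo;AI %%i chose output !GA.choice! - fitness: !GA.fit!
      )

    Exit /b
    :GA.CreateInput <out>

      rem hypothetical input: 8 random booleans (one per input neuron of the "8 10 14 2" structure)
      set "GA.inputline="
      for /l %%i in (1,1,8) do (
        set /a GA.val=!random! %% 2
        set "GA.inputline=!GA.inputline! !GA.val!"
      )
      set "%~1=!GA.inputline:~1!"

    Exit /b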






  • If you have looked at the source code to understand a bit how it works, you will see that the code is not designed to support the NEAT algorithm (or at least implementing it would make things somewhat heavier).
    I once implemented this algorithm in C++ (object-oriented), so let me know if that interests you Mr. Green Razz
    Otherwise, here is a tutorial that explains really well how to implement such an algorithm:


    The tutorial is really complete; it provides the source code in the description (which is not necessarily the case for every tutorial).
    It also provides this PDF paper: http://nn.cs.utexas.edu/downloads/papers/stanley.ec02.pdf




  • Q-Learning (it is reinforcement learning), more details here: https://fr.wikipedia.org/wiki/Q-learning (a minimal sketch of the update rule in Batch is given a bit further below)
    I also suggest a tutorial (this time a text tutorial, extremely well made and well explained, with the source code and other links for more details)




    There is also a very good tutorial (in French):





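    Since Batch only handles integers, here is a minimal sketch of the Q-Learning update rule itself, for a single observed transition (s, a, r, s'). Everything here is my own illustration: the states, actions, reward and the scaling by 1000 are arbitrary, and hooking a neural network up to it is left to you:
    Code:
    @echo off
    setlocal enabledelayedexpansion

    rem Q-values are stored scaled by 1000 because Batch only handles integers
    rem Q[s.a] = value of action a in state s ; alpha = 0.1 ; gamma = 0.9
    set Q[0.0]=0
    set Q[0.1]=0
    set Q[1.0]=0
    set Q[1.1]=0

    rem one observed transition: state s=0, action a=1, reward r=1 (scaled: 1000), next state s'=1
    set s=0
    set a=1
    set r=1000
    set sp=1

    rem max Q over the actions available in s'
    set /a maxq=!Q[%sp%.0]!
    if !Q[%sp%.1]! gtr !maxq! set /a maxq=!Q[%sp%.1]!

    rem Q(s,a) += alpha * ( r + gamma * maxQ(s') - Q(s,a) )
    rem alpha = 1/10 and gamma = 9/10 are applied with integer divisions
    set /a newq=!Q[%s%.%a%]! + (r + (9*maxq)/10 - !Q[%s%.%a%]!)/10
    set Q[%s%.%a%]=!newq!

    echo;new Q value for state %s%, action %a%: !newq! (scaled by 1000)
    pause>nul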




Note: I am only providing an external command to create, train and evaluate neural networks. If you want to couple the neural networks with other algorithms such as Genetic Algorithms or Reinforcement Learning algorithms, you will have to do it yourself. I do, however, give you some leads and tutorials.

I hope I have been complete and clear enough. If you ever notice a bug, have an idea, or simply want to give your opinion: do not hesitate Okay Mr. Green



