Neural Networks Exposed: Beautiful Genetic Algorithms in C++

Genetic algorithm generations help neural networks learn

This week, we are building feed-forward neural networks that learn through a genetic algorithm. These networks are much easier to understand than their back-propagation counterparts. Please join me on this step-by-step guide as simple neural networks learn through generations of selection.

Neural Networks

Neural networks can be a difficult subject to dive into when implementing them from the ground up. Here, I will explain a simple ‘feed-forward’ network, since we do not need back-propagation for this article. Fortunately, back-propagation is the only part of a neural net that requires deep knowledge of calculus.

So, if you fear feeling dumb like me, fret not! Multiplication is all we need!

I chose evolution because it requires less math 😉

[Photo: Not Charles Darwin]

Feed-Forward Neural Networks

A feed-forward neural network only feeds data in one direction across the network.

[Diagram: a feed-forward neural network with 2 inputs, 4 hidden-layer neurons, and 2 output neurons]

The above diagram shows a neural network with 2 inputs, 4 hidden-layer neurons, and 2 output neurons. The data flows from left to right; at each layer, every neuron transforms the data in parallel.

[Diagram: the flow of data through a single neuron]

A neuron simply takes in all of its inputs, multiplies each input by a paired weight stored in the neuron, and sums the products together. That sum is passed through a sigmoid function, and the result is the neuron’s output.

Looking back at our neural network diagram, the hidden layer neurons each have 2 inputs. So, each neuron will store 2 weights. Therefore, the hidden layer stores 8 weights in total.
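
To make that concrete, here is a minimal sketch of a neuron in C++. This is illustrative only, not the project’s actual class:

#include <cmath>
#include <vector>

// A single neuron: weighted sum of inputs passed through a sigmoid.
struct Neuron
{
    std::vector<float> weights; // one weight per input

    float activate(const std::vector<float> &inputs) const
    {
        float sum = 0.0f;
        for (size_t i = 0; i < weights.size(); i++)
            sum += inputs[i] * weights[i];
        // The sigmoid 1 / (1 + e^-x) squashes the sum into the range (0, 1)
        return 1.0f / (1.0f + std::exp(-sum));
    }
};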

Neural Networks and Genetic Algorithms

The basic rule which governs my algorithm is:

At the start of each generation, select the highest performing agents from the previous generation and combine their neural networks to form the next generation’s agents.

The important factors are 1) how you select the top performers, and 2) how the next generation is stitched together from the previous winners.

Code Time

The code base for this project has a lot of additional functionality that is necessary to facilitate agents’ learning. However, to keep this post short and sweet, I am going to focus on the key areas that make this process work.

The Problem to Solve

The problem which our agents must solve is simple. They must get as close to the goal as possible, given the number of ‘ticks’ in a generation.

Overview

A ‘tick’ is a time step allotted to each agent, in which the agent chooses which direction to move and how far that move will be.

When all ticks have elapsed, the program orders the agents from least to greatest distance from the goal. Next, it selects a few agents from the top (closest to the goal) to seed our next generation. This process continues until we reach the last configured generation.
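
For reference, the configuration constants that appear in the code below look roughly like this. The names come from the project’s code; the values here are only illustrative:

const int NUM_GEN = 100;                           // generations to simulate
const int NUM_TICKS_PER_GEN = 50;                  // time steps per generation
const int NUM_AGENTS_PER_GEN = 1000;               // population size
const int NUM_AGENTS_SELECTED_EACH_GENERATION = 2; // winners kept to seed the next generation
const float BOUNDARY_EDGE_LENGTH = 1000.0f;        // world size, used to normalize sensor values
const bool PRINT_GENERATION_PERFORMANCE = true;    // log the best distance each generation
const bool DRAW_GENERATION_PERFORMANCE = true;     // draw agent paths for some generations
const int DRAW_EVERY_NTH_GENERATION = 10;          // how often to draw
const bool DRAW_FULL_POPULATION = false;           // draw the whole population, or only the winners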

Main Loop

for (int generation = 0; generation < NUM_GEN; generation++)
{
    // Set up this generation's agents
    if (!closest)
    {
        // ... create our initial generation of agents with random weights ...
    }
    else
    {
        // ... stitch our next set of agents from the previous generation's closest ...
    }

    // Run our simulation
    run_sim(*agents, goal);

    // Rank our agents and keep the configured number of top performers
    closest = get_closest_agents(*agents, goal, generation, NUM_AGENTS_SELECTED_EACH_GENERATION);

    if (PRINT_GENERATION_PERFORMANCE)
        std::cout << closest->at(0)->distance << "," << mutation_chance << std::endl;
}

Our main loop is simple:

  1. Set up the generation’s agents.
  2. Run the simulation.
  3. Rank the agents by final distance to the goal.

Setting up the neural networks

The first generation is a set of agents. Each agent has a neural network that is initialized with random weight values and contains the following elements:

  • 2 inputs (distance to goal in x- and y-directions)
  • 10 hidden neurons
  • 4 outputs

Two outputs represent the x- and y-distances the agent wants to move, each between 0 and 1. The other two outputs dictate whether each direction of movement is positive or negative.
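
The decoding step is not shown in this post, but interpreting those four outputs could look something like this sketch (the real Agent::move may differ):

// Decode the network's four outputs into a signed movement step.
// outputs[0], outputs[1]: step sizes in x and y, each in (0, 1)
// outputs[2], outputs[3]: direction flags; above 0.5 means move in the positive direction
void apply_move(Position *pos, const float outputs[4])
{
    float dx = outputs[0] * (outputs[2] > 0.5f ? 1.0f : -1.0f);
    float dy = outputs[1] * (outputs[3] > 0.5f ? 1.0f : -1.0f);
    pos->x += dx;
    pos->y += dy;
}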

The following code implements agent creation:

for (int agent_i = 0; agent_i < NUM_AGENTS_PER_GEN; agent_i++)
{
    // Check if we are basing our agents on a previous generation's winners
    if (based_on)
    {
        // Pick two distinct parents from the winners
        int choice1 = get_rand_int(0, based_on->size() - 1);
        int choice2 = get_rand_int(0, based_on->size() - 1);
        while (choice2 == choice1)
            choice2 = get_rand_int(0, based_on->size() - 1);
        // Using default mutation delta
        agents->push_back(new Agent(new Position(start_pos->x, start_pos->y), based_on->at(choice1)->agent, based_on->at(choice2)->agent, mt, mutation_chance));
    }
    else
    {
        // Create agents with two sensors: distance from goal in x and y
        agents->push_back(new Agent(new Position(start_pos->x, start_pos->y), 2));
    }
}

Simulation Run

The simulation runs for the number of ticks configured. At each tick, it “asks” the agent for a desired distance and direction, based on the agent’s current distance from the goal.

void run_sim(std::vector<Agent *> agents, Position *goal)
{
    for (int tick = 0; tick < NUM_TICKS_PER_GEN; tick++)
    {
        for (Agent *a : agents)
        {
// Sense the agent's distance from the goal
            float sensors[2] = {
                (a->positions.back()->x - goal->x) / BOUNDARY_EDGE_LENGTH,
                (a->positions.back()->y - goal->y) / BOUNDARY_EDGE_LENGTH};
            // Ask the Agent what it wants to do
            a->move(sensors);
        }
    }
}

A Few More Specifics

Selecting top performers

It is important to note that the way you select your ‘winners’ greatly determines the behavior of your agents. My program selects winners in the following code:

std::vector<AgentDistancePair *> *agent_distance_pairs = new std::vector<AgentDistancePair *>;

for (Agent *a : agents)
    agent_distance_pairs->push_back(new AgentDistancePair(a, get_distance(*(a->positions.back()), *goal)));
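
The post never shows the definition of AgentDistancePair, but judging by how it is used, it is presumably a small struct along these lines:

// Pairs an agent with its final distance from the goal so both can be
// sorted together. (A guess at the definition; the real one may differ.)
struct AgentDistancePair
{
    Agent *agent;
    float distance;

    AgentDistancePair(Agent *agent, float distance)
        : agent(agent), distance(distance) {}
};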

The algorithm pairs each agent’s pointer with its distance from the goal, encapsulated in an AgentDistancePair object. Then, the program sorts the vector of these pairings using the following comparison function:

bool compare_agent_distance_pair(AgentDistancePair *first, AgentDistancePair *second)
{
    return first->distance < second->distance;
}

Finally, we trim the list down to the number specified by the caller (in other words, how many of the top performers do we want?), taking care to free each discarded pair as we go.

while (agent_distance_pairs->size() > num_to_find)
{
    delete agent_distance_pairs->back();
    agent_distance_pairs->pop_back();
}

Altogether, the whole function looks like this:

std::vector<AgentDistancePair *> *get_closest_agents(std::vector<Agent *> agents, Position *goal, int generation_number, int num_to_find = 2)
{
    std::vector<AgentDistancePair *> *agent_distance_pairs = new std::vector<AgentDistancePair *>;

    for (Agent *a : agents)
        agent_distance_pairs->push_back(new AgentDistancePair(a, get_distance(*(a->positions.back()), *goal)));

    sort(agent_distance_pairs->begin(), agent_distance_pairs->end(), compare_agent_distance_pair);

    if (DRAW_GENERATION_PERFORMANCE && (generation_number % DRAW_EVERY_NTH_GENERATION) == 0 && DRAW_FULL_POPULATION)
        draw_agent_path(agent_distance_pairs, goal, generation_number);

    while (agent_distance_pairs->size() > num_to_find)
    {
        delete agent_distance_pairs->back();
        agent_distance_pairs->pop_back();
    }

    if (DRAW_GENERATION_PERFORMANCE && (generation_number % DRAW_EVERY_NTH_GENERATION) == 0 && !DRAW_FULL_POPULATION)
        draw_agent_path(agent_distance_pairs, goal, generation_number);

    return agent_distance_pairs;
}
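
As a side note, the raw new/delete pairs above work, but the same trimming step could be written with smart pointers so the cleanup happens automatically. A minimal sketch, assuming the same AgentDistancePair type:

#include <memory>
#include <vector>

std::vector<std::unique_ptr<AgentDistancePair>> pairs;
// ... fill and sort pairs as above ...
while (pairs.size() > num_to_find)
    pairs.pop_back(); // the unique_ptr destructor frees the discarded pair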

Stitching together agents of the past

In a genetic algorithm, the neural networks learn by being stitched together with other top performers, sprinkled with a bit of randomness. To stitch the weights together, we choose one of three methods:

  • EveryOther
  • SingleSplit
  • RandomChoice

EveryOther

The ‘EveryOther’ method builds the child by alternating evenly between the two parents, taking every other weight from each.

SingleSplit

The ‘SingleSplit’ method picks a randomly selected weight index as a ‘split line’: one parent provides all of the child’s weights before the split, and the other parent provides the rest.

RandomChoice

The ‘RandomChoice’ method randomly selects each weight from one parent or the other as it builds the child. No diagram needed ;).
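
To make the three strategies concrete, here is a sketch of how each one could stitch two parents’ weight vectors into a child. This is illustrative code under my own naming, not the project’s implementation:

#include <cstdlib>
#include <vector>

enum class StitchMethod { EveryOther, SingleSplit, RandomChoice };

// Build a child's weights from two equal-length parent weight vectors.
std::vector<float> stitch_weights(const std::vector<float> &parent_a,
                                  const std::vector<float> &parent_b,
                                  StitchMethod method)
{
    std::vector<float> child(parent_a.size());
    // For SingleSplit: the random index where responsibility switches parents
    size_t split = std::rand() % parent_a.size();

    for (size_t i = 0; i < child.size(); i++)
    {
        switch (method)
        {
        case StitchMethod::EveryOther:   // alternate parents on even/odd indices
            child[i] = (i % 2 == 0) ? parent_a[i] : parent_b[i];
            break;
        case StitchMethod::SingleSplit:  // one parent before the split line, the other after
            child[i] = (i < split) ? parent_a[i] : parent_b[i];
            break;
        case StitchMethod::RandomChoice: // coin flip per weight
            child[i] = (std::rand() % 2 == 0) ? parent_a[i] : parent_b[i];
            break;
        }
    }
    return child;
}

On top of whichever method is chosen, each weight typically also has a small chance of being mutated, which is where the mutation_chance value from the main loop comes in.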

Conclusion

[Animation: Generation #5 of a ‘SingleSplit’ method simulation. The green dot is the selected agent of the generation and the yellow dot is its goal!]

This has been a huge learning experience for me! I will definitely be adding tiny brains to many things in the future.

If you are interested in exploring or cloning this code, you can access it on my GitHub. Thank you for reading! I hope you found this helpful and thought-provoking! As always, feel free to leave a comment or suggestion below.

