Shreeya Garg - Week 16 : Neural Networks




Image Credits: https://francescolelli.info/tutorial/neural-networks-a-collection-of-youtube-videos-for-learning-the-basics/

Human beings learn from observation, and our knowledge comes from our memory of past observations and experiences. As Ben Dickson explains, “Your biological neural network [reprocesses] your past experience to deal with a novel situation.” For example, we know that an apple is an apple because of past experiences where a similar-looking fruit was an apple.

In CSP, we recently learned about Neural Networks, which are a key aspect of Machine Learning. Neural Networks enable machines to learn and develop their own algorithms, rather than relying on a computer programmer. I found it fascinating that this relatively new, complex technology is largely inspired by the basic structure of the human brain and the biological concept of memory. Essentially, Neural Networks are used to train a computer on a large dataset of examples, so that it can recognize trends and develop algorithms. These algorithms can then predict the output of a novel situation based on the program’s memory of past data.

When the human brain “receive[s] an external stimulus like vision or sound, data travels as electrical signals through a path between neurons.” Similarly, in Neural Networks, input data travels through many interconnected layers of “neurons,” or nodes. There is an input layer, an output layer, and several hidden layers in between. Nodes of different layers are connected via channels.

As Simplilearn explains, all of the inputs (e.g., each pixel in an image) are transported to a node in the next layer via a channel, with multiple inputs going into the same node. Each of these channels has a “weight” associated with it, which is a number that the input is multiplied by. The network then sums all the weighted inputs that go into the same node. Another number called the “bias” is added to this weighted sum. This new number is then passed through an activation function, which determines whether or not the neuron is activated by checking if the output is above a threshold. If the neuron is activated, the output of this node then becomes the input for the next layer. This process repeats until the output is determined. This procedure is called forward propagation and is used both to train the network and to predict an outcome.
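The forward propagation steps described above can be sketched in a few lines of Python. This is a minimal, hypothetical example (the tiny network shape, the specific weights and biases, and the choice of a sigmoid activation are all illustrative assumptions, not taken from any particular framework):

```python
import math

def forward(inputs, weights, biases):
    """One layer of forward propagation: each node takes a weighted
    sum of the inputs, adds a bias, and applies an activation function."""
    outputs = []
    for node_weights, bias in zip(weights, biases):
        # Multiply each input by its channel's weight and sum the results
        weighted_sum = sum(w * x for w, x in zip(node_weights, inputs)) + bias
        # Sigmoid activation squashes the sum into (0, 1);
        # values near 1 mean the neuron is strongly "activated"
        outputs.append(1 / (1 + math.exp(-weighted_sum)))
    return outputs

# A tiny network: 2 inputs -> a hidden layer of 2 nodes -> 1 output node
hidden = forward([0.5, 0.8], weights=[[0.1, 0.4], [0.7, -0.2]], biases=[0.0, 0.1])
output = forward(hidden, weights=[[0.6, -0.3]], biases=[0.05])
print(output)  # a single value between 0 and 1
```

Calling `forward` once per layer, with each layer's output feeding the next layer's input, is exactly the repetition the paragraph describes.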

Training a neural network also requires a second process called backpropagation. During training, the machine knows the correct output, so once it predicts an output, it can also determine the error. The error is fed back into the network, and the weights and biases are adjusted accordingly. This cycle of forward propagation and backpropagation is repeated over large training datasets until the error is minimized. At this point, a neural network can assess data that it has not previously seen because it has an accurate algorithm based on its memory of past data.

These “deep neural networks . . . sometimes match or surpass human performance in specific tasks,” making them extremely useful for applications like facial recognition. Neural Networks help demonstrate the power of memory and the complexity of the human brain. 


Sources:
https://www.youtube.com/watch?v=bfmFfD2RIcg
https://bdtechtalks.com/2020/06/22/direct-fit-artificial-neural-networks/
https://www.marshall.usc.edu/blog/how-do-neural-networks-mimic-human-brain


Comments

  1. Hi Shreeya! I really loved the correlation you drew between neural networks that we have learnt in AP Computer Science Principles with that of the applications in fields like facial recognition with its memory and complexity it holds in its algorithms. Considering the error that is put back into the network, this prediction that is made based on large datasets can explicitly activate the amount of inputs there are in a certain threshold that it is holding. I find it really fascinating to realize how such networks can cause such a large difference in terms of passing the human performance level, exceeding it beyond its normal boundaries. Similar to how sound, vision and electrical signals combine to give the human senses to different actions, neural networks work towards weighing down this sum of bias that is accumulated. Thank you for your blog!

  2. Hello Shreeya! I did not like your connection between neural networks like we learned in AP Computer Science Principles because I do not like AP Computer Science Principles. Nonetheless, it is still a great topic to talk about, and neural networks are an extremely interesting thing to watch out for in the world of Artificial Intelligence. Neural networks can do so many things: they can create such a human-like response and learn from their mistakes so well that at some point, I suspect the Turing Test will eventually fail on a machine. Hopefully we do not see the events of Terminator take place during our lifetime.

  3. Hey Shreeya, I was first introduced to Neural Networks by my friend who's very interested in machine learning and AI. Learning about AI and machine learning through him was very intriguing, and he even had me help him out with data collection and creating the training models that the processes would be based on. Creating these training models and datasets was definitely a grueling and tedious task, but the rewards were well worth the effort. Neural networks have such unique applications because of their complex algorithms and functionality, and have so much potential in the future. The ability of these programs to evolve and develop from their previous iterations fascinates me and really begs the question as to how long it will be before a sentient computer program can be created.

  4. Hi Shreeya! I self studied APCSA last year because my mom told me APCSP was useless. Of course, as your blog post shows, CSP is not completely useless as the things learned there can be related to other topics such as memory and the human brain. However, I don't regret not taking CSP as I have heard a multitude of questionable remarks from my friends and I know for a fact that I'm not interested in STEM. I never really thought about how we recognize things because we have seen them before even though there isn't really any other way to describe why things look familiar to us.

  5. Hi Shreeya! Personally, I wish we would’ve spent more time on neural networks and artificial intelligence overall in CSP over some of the other topics we covered this year. As you mentioned, it has many more real-world applications in the way that it models the complexity of the human brain and memory. Compared to some of the other niche topics we covered, like 2’s complement or ethics in computing, neural networks seem more interesting overall and would’ve been much more engaging to learn about.

  6. Hi Shreeya, I've been hearing from my friends that everything they learnt in CSP has been useless, but I see that at least some topics might have been interesting! The point you make about how this technology is inspired by the way the human brain works is really mind-blowing. It's crazy and a little scary to think about how the technology around us is a direct reflection of who we are as people. I think that's why there are controversies surrounding the development of artificial intelligence. If it truly is a reflection of the human mind, there will be many positive aspects to it, but there will also be many negative aspects as well. Anyways, thank you for your post!

  7. Hi Shreeya, I literally hate AP CSP so much, especially because literally 75% is so irrelevant and it discourages talented programmers from pursuing computer science when you really think about it. However, neural networks are a really cool concept. My friend used neural networks to code a Connect Four game where you play against the program and it learns different techniques, and I was pretty fascinated by the concept. I never knew that neural networks are really this advanced, to the point where high school students can code machine learning programs. I wonder if we can use neural networks to create works that require emotional depth, because that is the one thing people say AI won't be able to replace.


