PROBABILITY PARADOXES?

Monty Hall Problem

Let’s consider the following fixed scenario to help us understand the solution:

Just read the following as plain text; don't overthink what is written or try to apply your own logic to the problem yet. One quick read, taking whatever I say as true, and it will all make sense in the end.

We choose door number 1 (without loss of generality; the apostrophe below marks our chosen door) and the treasure (T) could be behind any of the 3 doors:

Scenario 1:    1’    2(T)    3

Scenario 2:    1’    2    3(T)

Scenario 3:    1’(T)    2    3

Step 1: Looking at all the scenarios, in Scenario 1 the host will open door 3 and in Scenario 2 the host will open door 2. In those scenarios, switching from door 1 to door 2 or from door 1 to door 3 respectively gets us the Treasure, i.e. in 2 out of 3 scenarios we get our Treasure simply by switching. Hence it is more beneficial for us to switch.

Now, wait wait wait!! How did this make sense? Where was the part where I thought that the probability was 50-50 and not ⅔?

Scenario 1:    1’    2(T)    3

Scenario 2:    1’    2    3(T)

Scenario 3:    1’(T)    2    3

Step 2: Let me tell you where. See, when you think about the show host opening a door, say door 2, you conclude that you are in a universe where the treasure is not behind door number 2, so you remove that scenario from the picture (in this case, Scenario 1).

So it comes down to this: you are either in Scenario 2 or Scenario 3, and hence the probability of the Treasure being behind door 1 is ½.

But wait, that makes sense! Right?

Scenario 1:    1’    2(T)    3

Scenario 2:    1’    2    3(T)

Scenario 3:    1’(T)    2    3

Step 3: Wrong! You forgot the part where the game show host is obligated not to open the door with the Treasure. When you choose one of the 3 doors, you take a ⅓ chance of getting the treasure, leaving a ⅔ chance of the Treasure being behind the other 2 doors combined. Think of it as choosing between sticking with 1 door or effectively choosing 2 doors: the game show host will open 1 of the other 2 doors, and by switching you choose to open the second one as well, so you are actually opening both of the doors that you did not select in the first place. By randomly selecting one of the 3 doors with ⅓ probability and then choosing to open both of the other doors, you get ⅓ + ⅓ = ⅔; hence switching means taking the ⅔ probability option instead of the ⅓ of your chosen door.

Step 4: But wait! Two doors end up opened at the end of the game whether you switch or not, right? So by that logic I am anyway opening a ⅓ + ⅓ = ⅔ probability combination, which is the same as in the scenario where I switch, hence 50-50. Right?

Scenario 1:    1’    2(T)    3

Scenario 2:    1’    2    3(T)

Scenario 3:    1’(T)    2    3

Wrong! The probability of choosing the correct door is ⅓, we all know that, but the probability of choosing an incorrect door is ⅔! So in 2 out of 3 scenarios we will end up choosing a door which does not have the Treasure (Scenarios 1 and 2), and in only 1 out of 3 scenarios will we end up choosing the Treasure (Scenario 3). So in 2 out of 3 scenarios the Treasure is behind the unopened door you can switch to, and in only 1 out of 3 scenarios is the Treasure behind the door that you originally chose (door 1).

So to conclude: your probability of choosing the wrong door is ⅔, and hence the probability of the treasure being behind the other 2 doors is ⅔. Since the game show host opens one of those 2 doors and you choose to open the remaining one by switching, you are effectively opening both of the doors that you did not choose. And in 2 out of 3 scenarios the Treasure will be behind one of them, because your chance of choosing the right door on the first try is just 1 out of 3.
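If you'd rather see this with numbers than with arguments, here's a small Python simulation (just an illustrative sketch I'm adding; the door numbering and the use of the `random` module are my own choices) that plays the game many times with both strategies:

```python
import random

def play(switch, trials=100_000):
    """Return the fraction of games won with the given strategy."""
    wins = 0
    for _ in range(trials):
        treasure = random.randint(1, 3)   # door hiding the treasure
        choice = 1                        # we always pick door 1 (without loss of generality)
        # The host opens a door that is neither our choice nor the treasure door
        opened = next(d for d in (2, 3) if d != treasure)
        if switch:
            # Switch to the single door that is neither chosen nor opened
            choice = next(d for d in (1, 2, 3) if d not in (choice, opened))
        wins += (choice == treasure)
    return wins / trials

print("Stick with door 1:", play(switch=False))   # roughly 1/3
print("Switch:           ", play(switch=True))    # roughly 2/3
```

Sticking wins roughly a third of the time and switching roughly two-thirds of the time, exactly matching the three-scenario argument above.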

Bertrand’s Box Problem

The problem involves 3 boxes containing Gold (G) and Silver (S) coins.

Box 1:    2 G

Box 2:    2 S

Box 3:    1 G    1 S

The problem is simple: you choose a box at random and pick out a coin. If the coin is gold, what is the probability that the other coin in the same box is also gold?

You have chosen a box which has at least 1 gold coin, hence your box is not Box 2, as that box has two silver coins. So your box is either Box 1 or Box 3, hence the other coin can be either silver or gold, each with probability 50%, right?

Wrong! It is the same problem as our favorite Monty Hall Problem.

Let me explain, let’s name all the coins:

Box 1:    G1    G2

Box 2:    S1    S2

Box 3:    G3    S3

Now we either have Box 1 or Box 3:

Box 1:    G1    G2

Box 3:    G3    S3

Now, we know the coin we have chosen is Gold. So it can be one of G1, G2, G3.

Scenario 1: G1 is chosen. G1 -> G2

Scenario 2: G2 is chosen. G2 -> G1

Scenario 3: G3 is chosen. G3 -> S3

Hence in 2 out of 3 scenarios the second coin is also gold.

Therefore the probability that the second coin is gold, given that our first coin is gold, is ⅔ and not ½.
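As with Monty Hall, a quick simulation (an illustrative sketch; the box and coin encoding below is my own) confirms the counting argument:

```python
import random

boxes = [("G", "G"), ("S", "S"), ("G", "S")]   # Box 1, Box 2, Box 3

first_gold = 0
both_gold = 0
for _ in range(100_000):
    box = random.choice(boxes)                 # pick a box uniformly at random
    first, second = random.sample(box, 2)      # draw one coin; the other stays in the box
    if first == "G":                           # condition on the drawn coin being gold
        first_gold += 1
        both_gold += (second == "G")

print(both_gold / first_gold)                  # roughly 0.667, i.e. 2/3
```

Conditioning only on the runs where the drawn coin came out gold, the remaining coin is gold about two-thirds of the time.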

Other People’s Opinions

Yesterday I came back to Pune from Manipal after successfully completing (passing) my first year of BTech. I was packing my bags to go to my hometown when my mom made a remark about me not wearing shorts in the town, as it's not common in villages to wear shorts.

I don't get why people care so much about other people's opinions. I am going back to meet my family members and relatives whom I haven't met in almost 4 years, and instead of being excited about that, I am expected to worry about what clothes I should or should not wear just in case some stranger comes into my house and judges me, not on my knowledge, not on my skills, not on my values, but on what clothes I wear?

Firstly, I don't value anyone else's opinion of me except for the few who are very close to my heart, and secondly, I wouldn't want to earn respect in the eyes of someone who is so shallow as to judge me on what I wear anyway.


You are just as good as your last at bat


Life is full of opportunities, and I like to imagine it as if we are the batsman on strike and the balls are the opportunities thrown at us by life. We are able to hit some deliveries elegantly and score a decent 3-4 runs. But occasionally there's one delivery which we know, from the moment it leaves the bowler's hand, is the one we will be able to convert perfectly, grabbing the opportunity at hand and hitting it out of the stadium. It feels awesome and you get a massive boost of confidence. But a great batsman is known for not letting the success of his previous shot get into his head and affect his next shot. The one who is able to respect the bowler's talent and give full attention to the next delivery is a mature batsman.

Same is the case in life. People often allow their past successes to get into their head and then mess up the new opportunities at hand. I am not disrespecting the tremendous effort you put in, the countless early mornings and late nights, punting all leisurely activities just to achieve your goal. I'm happy you did all of that! Kudos to your win! You have every right to celebrate it to the fullest.


But once you achieve what you wanted – get into that dream product-based company, get into the dream tech club (as in my case :P), or successfully get your startup funded – there's something else way more important waiting for you at your door, knocking silently: new opportunities. Now the question is, what are you gonna do about it?

People often tend to cling to their past achievements. Yeah, I know it's a great feat you got into Google. But so did millions of others. I am not putting you down by scaring you with the competition. It's simply that your past achievements are independent of your future achievements. And so are your failures. At no point in life are you defined by any single win or loss. There are lots of non-IITians who have made it big, and there are tons of successful people who hadn't started their actual career till their mid 30s. One failure is not a scarlet letter saying that you won't be able to achieve anything extraordinary in your life. This side is talked about quite frequently, like, "One piece of paper can't determine my future." Everyone says this. But the other side is more often than not forgotten, which is: a single achievement doesn't decide your future either.

As A. P. J. Abdul Kalam said, “Don’t take rest after your first victory because if you fail in second, more lips are waiting to say that your first victory was just luck.”


There are lots of changes you have to make after your achievement, and adapting to new changes is a tough one. You will probably be required to put in more time working, you will probably have to learn at a huge pace and then cope with scary, huge deadlines. But that is how you grow. You will more often than not feel like giving up. Maybe you feel that you don't belong here or that this is not meant for you, but trust yourself and trust the process. As I like to put it, "Diamonds don't just take heat and pressure to form. They take TIME." Deploy patience in life, keep putting in the effort and you will definitely succeed. Ask for help whenever you can't figure something out; don't let your ego resist your growth. It's okay if you are the least talented person in the entire company. Have the desire to learn and put in the effort. Make up for the lack of talent with hard (sincere) work and dedication.

Just deploy hard work, patience and consistency and you will succeed in life. 🙂

Thanks for reading my blog! Hope I could provide you with some value. If there are any suggestions please please let me know!
Hope you have a wonderful day 🙂

MANAS Taskphase: What did I learn?

Hi! So this blog gives you a brief idea of how I got into Project MANAS – the official AI and Robotics club of MIT. It covers what I learnt during my Task Phase, the 5-month-long rigorous selection process which I was able to survive because of the great guidance I received from the mentors here.
My journey started on 9/9/2018 when I walked into the MIT Automobile Workshop for my first interview. I won't lie, I really felt comfortable the moment I sat down and had a word with the team, as they were very welcoming. The amount of enthusiasm and passion the team had was really captivating. When I got into the task phase I realized that I had found myself a family, people I could look up to, in a place like Manipal where I was barely three months old!
PROJECT MANAS, this one is for you!

1. Getting Started

2. Machine Learning

  • Andrew Ng Coursera (Week 1-6):
    This is really an awesome place to get started with Machine Learning and Deep Learning. Professor Andrew explains all the theory, especially the Mathematics, in moderate depth (many find it overwhelming, but I think it is crucial to know the logic and Math behind every working formula and algorithm). Though in some cases he doesn't derive formulas which require basic Calculus from 10+2, he does explain the parts which are necessary, and deriving the formulas which were left untouched will definitely improve your understanding of the concepts. The course asks you to code the assignments in MATLAB/Octave, which according to him is very useful for building concepts, but I personally implemented the code in Python, which is a very widely used language in ML. I have also heard from a few friends that a major portion of the code for the assignments is already given; we just have to code the main formula part. So if you are really interested in studying further topics like Neural Networks, Convolutional Neural Networks, Reinforcement Learning and much more, I strongly recommend implementing in Python, because the frameworks for all those topics are in Python and you will have to switch to it eventually. It's better to learn to implement the basics in Python than to jump directly to advanced topics whose code you will have a very hard time figuring out.

Here’s the link for the course:
Andrew Ng Machine Learning


3. Robot Operating System (ROS)

ROS:
The beginner tutorials will familiarize you with ROS, a meta operating system used for communication in robots. I worked on the Kinetic distro of ROS.
The intermediate tutorials explain how to package the functionalities of ROS in bigger projects. The installation procedure and tutorials are at the following link:
ROS Tutorial


4. OpenCV (Computer Vision)

  • OpenCV Core Module:
    The Core module of OpenCV contains all of the basic building blocks of the library. It will give you a basic idea of how images are stored and processed. It also teaches how filters work and covers basic image operations like blending and changing contrast and brightness.
    Following is the link for the module:
    Core Module
  • OpenCV ImgProc Module:
    The imgproc module of OpenCV contains functions for image processing and manipulation. It ranges from drawing simple geometry in OpenCV to smoothing images and detecting edges using the Canny edge detector and the Hough line transform.
    ImgProc Module


5. CS 231n (Convolutional Neural Networks)

Convolutional Neural Networks for Visual Recognition is a course offered by Stanford University and one of the best courses out there to start with convolutional neural networks.
This course is a deep dive into the details of deep learning architectures, with a focus on learning end-to-end models for visual recognition tasks, particularly image classification.

Resources:
Course Notes
Video Tutorials (Lecture 1-16)


Apart from all of these courses which we were required to complete in the task-phase, I found many others which will definitely aid you in the learning journey.

1. Python Basics

The IBM Coursera Data Science Specialization is a very basic and well-rounded specialization which teaches you all the basics of Python that are required for implementing the Andrew Ng Machine Learning assignments. It also builds up your Python foundations for the further Deep Learning courses provided by deeplearning.ai (mentioned below).
If you are short on time (like I was) you could do only courses 4, 6, 7 and 8 to get started with implementing the Andrew Ng assignments. This will take you around 4-6 hours of video.

Following is the link for this course:
IBM Coursera Data Science Specialization


2. Deep Learning and Neural Networks

Coursera Deep Learning Specialization course 1 by deeplearning.ai
This Deep Learning course is the best course I have ever seen! Period! Not only this particular course but the entire 5-course Specialization is amazing and I recommend it 100% to anyone who is serious about Deep Learning. Professor Andrew Ng teaches this course too.
This course is taught entirely in Python and the code syntax is also taught from the very basics. You could jump directly into this course once you're done with the first 6 weeks of Andrew Ng's Machine Learning.
It revisits the basics of Neural Networks and gives you good practice with Python's NumPy library, which is the core numerical library for Machine Learning. It's okay if you don't have prior experience with Python; this course will take care of it for you.
You will learn to implement an entire Neural Network from scratch using just Numpy. Complete all the assignments religiously and you will have a very solid conceptual and coding base for creating a Neural Network.

Following is the link for the course:
Neural Networks and Deep Learning

Following are my handwritten notes for this course:
Neural Networks


3. OpenCV (Computer Vision)

While learning OpenCV from its documentation there are some topics which are not explained well.

Following are links to the best videos I found on YouTube, each of which explains a particular topic in detail:

This is the best Hough line transform video:
Hough line transform theory
Explains Hough line transform using a mathematical example:
Hough line transform solved example

Computerphile is a sister channel of the very famous Numberphile YouTube channel. All their videos are very informative; they explain the concepts from the very basics and present the topics in a very interesting way. You must watch all of their computer vision videos!
Some of the videos which are very crucial to start with learning computer vision are as follows:
Computerphile Digital Images
Computerphile Resizing Images
Computerphile Multiple Dimension Images
Computerphile Blurs and Filters
Computerphile Sobel Derivatives
Computerphile Canny Edge Detector


4. CNN (Convolutional Neural Networks)

Coursera Deep Learning Specialization course 4 by deeplearning.ai is yet another masterpiece which, in my opinion, is better than CS231n, as it teaches the very basics of CNNs and clears up any doubts you could still have after doing CS231n.
It is again a very well structured course. Implement all the assignments religiously and you’ll be pretty comfortable with CNNs.

Following is the link for the course:
Convolutional Neural Networks

Thanks for reading my blog! Hope I could provide you with some value. If there are any suggestions or any better courses that you find please please let me know!
Happy (deep) learning! 🙂

MANAS Taskphase: Journey.

MANAS TASK PHASE:
(9/9 – 2/2)
These were the best 5 months of my life. Period!
.
.
I didn't have any awareness about the clubs and student projects when I came here. Then I met Sankalp Shekhar 😄 who told me about different student projects, including Project MANAS.

But I thought that I wouldn't be able to clear even the interview for the task phase, let alone get into one of the best teams.

This was because everyone around me had already learnt so much about programming and was already way ahead of me. 🤯

I gave the written tests of Thrust MIT (5/9) and Mars Rover Manipal (MRM) (6/9) and then luckily got into the interview round (7/9) of Thrust MIT, which was a really great experience with a lot to learn from the challenging questions. ✨ I got selected for the MRM interview too, but I overslept and couldn't make it on time 😂

I got selected in Thrust on 8/9 and I was very happy about it. I started visualising my next 4 years in the team, but destiny had some other plans for me! 😉
.
.
It started with a walk-in interview on 9th September(9/9) in the Manas workshop.
The amount of energy that all the team members had was through the roof! 🤯 That very moment I felt an extreme urge to get into the team!
Or at least give it my best! Shrijit took my interview and it was the best 90 mins of my life… yet! 😊
The questions were very challenging and really pushed my logical abilities to their limits! I walked out with no expectations but a lot of satisfaction after such an amazing interview. 😄

The results were another week away, so I continued with Thrust MIT. I wasn't having as much fun there, but I was very excited about all the opportunities I was exposed to.

On 16/9 the results were announced and I began looking through the 88-name-long list. And I found my name! 😄 I was so happy!

The next day we were called to report to the workshop to be told how our task phase was going to proceed. So basically we were assigned many tasks on the Trello board, each with a deadline by which we had to go for a one-on-one interview with our assigned mentor, who was Apratim in my case. 😄

So the learning began. We started with dual-booting Ubuntu and learning Unix and Git. The first interview for this was on 23rd September. 😊

After that, we started with the Andrew Ng Coursera course. Weeks 1-3 till 29th September, then an interview, then weeks 4-6 till 6th October, and then another interview. This was the best part because I got to know about Machine Learning and Deep Learning. And the concepts were very interesting! 😍

Then we moved on to learning ROS and OpenCV, which took up our October and November. We had 2 interviews to cover these topics. 💙

After that, we had a winter vacation of 1 month in December. But we got winter tasks to complete: the CS231n course (for convolutional neural networks) from Stanford on YouTube, figuring out a lane detection algorithm for 2 difficulty levels using OpenCV and ROS, and a third task of implementing a CNN to predict movie genres from movie posters. 🔥

After that, we had a huge interview on the winter tasks. Then whoever was selected so far (22 people) was assigned a final project and their respective mentors. The final project list was 11 projects long and we had to rank them in order of preference from 1 to 11. The projects were so overwhelming that I had no clue about any of them or what technology to use. But I somehow managed to fill in the preferences by midnight and got my "Squeeze Me" project the next day. 💖

I had Aneesh with me, who had the same project. Our mentor was Rahul. Basically, we had to implement an iterative pruning method to compress an image classifier, reducing the number of parameters and decreasing its size. We had 2 weeks to do that. 😬

After that, we had our last interview on 31st January & 1st February, in which a panel of 6 members questioned us individually for around 90 mins, with interesting, twisted questions on everything we had learnt from day 1.

Finally, results came out on 2nd February and 12 of us were selected! 😄

Made a lot of friends, met great mentors and learnt a lot! Thank you, everyone, who supported me in my lows 💖 and believed in me and motivated me when I was doubting my abilities.

Congratulations to everyone who got in the team and good luck to everyone who didn’t. I think at the end of the day the learning experience and the interaction with so many awesome people is what matters the most.😄

Really was an awesome learning experience! ✨
Thank you all! 💖

Getting out of your comfort zone.

No cart moves without overcoming the initial static friction. You've to push very hard in the beginning and then everything's smooth.

Same is the case in life. You've to eat shit even when you don't understand anything and you're just doing what you don't like. But you know that eating shit will eventually get you to do what you love. Once the friction is overcome, the cart runs smooth and you'll enjoy the journey.

Make a habit of eating shit for good. The saying "always do what makes you happy" can be misleading sometimes cuz it just makes a person too lazy to do anything and stops them from achieving great things in life.

You've to eat shit in the beginning for so long that you start understanding what's going on, so that you can decide if this topic is worth pursuing or not.

I have realised this through experience over the past 2 months. While doing tasks for Project Manas I used to feel comfortable with things I already knew, so I pursued only that, while ignoring others on the basis of "do what you love". But then I had no choice but to complete the tasks I didn't know shit about. I pushed myself to do things wayyyy out of my comfort zone and, after eating shit for a week or 2, finally realised how beautiful the topic is, and then even liked it more than the things I was comfortable doing.

This happened 2-3 times.

And now I’m at a point that I eat shit regardless of how pointless it is.. and I’m enjoying that process a lot.. just cuz that feeling after the initial push gives tremendous joy and feeling of success.

That’s my #tuesday_thoughts… What’s yours?

High school Math exam.

Hi! Let me tell you a story of how I was introduced to a beautiful branch of mathematics.

It was the ninth standard final semester mathematics paper. As usual I finished the 3 hr paper in 1.5 hr and had to sit in the hall for another 1.5 hr. So what I did was, I started playing with numbers….

In high school all of us have used pi = 22/7 whenever the radius of the geometrical object whose surface area/volume we had to calculate was a multiple of 7. So, being a nerd with a lot of spare time and the sole purpose of impressing my classmates even more (yeah, they were already very impressed by the 1.5 hr paper :P), I started calculating the value of 22/7.

22/7= 3.14. okay that was easy to remember!

22/7= 3.1428.  1428-1428-1428 (smiling) alright I can remember 4 decimals!

22/7= 3.142857. 142857-2857-2857-142857 (felt like Ramanujan) okay that’d be it!

But let’s still calculate some more decimals…

22/7= 3.1428571

22/7= 3.14285714

22/7= 3.142857142

22/7= 3.1428571428 okay wait!! is this number repeating?!?

22/7= 3.142857142857…… hurrah! I can remember all the decimal places of pi! 😛

looks at the watch.. 40 mins still remaining!! bummer 😦

umm what else can I do?

what is so interesting about this number anyway? why was this particular ratio, out of infinitely many others, selected to approximate the value of an important mathematical constant such as pi? and why is this special pattern repeating in this special fraction?

let’s do some math

142857×1= 142857

142857×2= 285714

142857×3= 428571 whoa whoa whoa what?!?! the numbers are rearranging themselves?! let’s go a bit further..

142857×4= 571428

142857×5= 714285

142857×6= 857142

okay, this is a mind-blowing property!

what’s 142857×7?

142857×7= 999999 what!?!? whyy!
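If you'd like to replay this on a computer instead of on the back of an exam paper, here's a tiny Python sketch (added just for illustration) that checks the cyclic property and hints at the why:

```python
# The repeating block of 22/7 (equivalently of 1/7) is 142857, a "cyclic number".
n = 142857
for k in range(1, 7):
    print(f"{n} x {k} = {n * k}")   # each product is a rotation of the digits 142857

# 142857 x 7 = 999999, which is exactly why the decimals repeat with period 6:
# 1/7 = 142857/999999 = 0.142857 142857 ...
print(f"{n} x 7 = {n * 7}")
```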


Machine Learning.

Hi there! I see that you’re interested in Machine Learning too!

So what exactly comes to your mind when you hear Machine Learning?

Machine Learning is basically the method by which a computer (machine) learns by itself to solve real-world problems.

I'll be blogging about the course I am doing on Coursera: Andrew Ng's Machine Learning. I will essentially be explaining the video lectures in some more detail and, hopefully, in a way that helps you understand them better.

Hope you enjoy the journey! 🙂


Machine Learning- Definition.

A program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E.

For example, let's say we've created a machine learning algorithm to play chess. The experience E would be the experience of having the program play tons and tons of games of chess, the task T would be the task of playing a game of chess, and the performance measure P would be the probability of the machine winning the next game against a new opponent.

Another example: take the algorithm which Google uses to segregate spam emails. The experience E would be the data of emails that users had already marked as spam manually before this algorithm came into existence; the machine learns from this experience E. The task T would be classifying an email as spam or not spam. And the performance measure P would be the probability of the machine correctly recognising a given set of emails as spam or not spam.

Types of Machine Learning.

1. Supervised Learning.

In this type of learning, question-answer pairs are given so that after the machine calculates the answer for a given question it can check whether it was right or not. According to the error (the difference between the machine's answer and the actual answer), the algorithm is penalised with a cost (function) so that it improves on its mistakes. Just as we learn new things.

Supervised Learning solves two kinds of problems:

1.1. Regression Problem.

These kinds of problems predict continuous values, e.g. predicting the cost of a house for a given land area. Cost is a continuous quantity, and hence this type of problem comes under Regression.
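Just to make this concrete, here's a tiny illustrative sketch in Python (the land areas and prices are made-up numbers, and scikit-learn is my own choice of library):

```python
from sklearn.linear_model import LinearRegression

# Hypothetical training data: land area (sq. ft.) -> house price (made-up values)
areas  = [[600], [800], [1000], [1200], [1500]]
prices = [30, 42, 50, 63, 75]

model = LinearRegression().fit(areas, prices)
print(model.predict([[1100]]))   # a continuous prediction, roughly 56
```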

1.2. Classification Problem.

These kinds of problems predict non-continuous (discrete) values, e.g. classifying whether a given e-mail is spam or not. For any input email, the result will be either spam or not-spam. This type of problem comes under Classification.
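A similarly tiny illustrative sketch for classification (the features and labels are made up; scikit-learn is again my own choice):

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical features per e-mail: [count of "free"/"offer" words, number of links]
X = [[5, 4], [7, 6], [6, 5], [0, 1], [1, 0], [0, 0]]
y = [1, 1, 1, 0, 0, 0]           # discrete labels: 1 = spam, 0 = not spam

clf = LogisticRegression().fit(X, y)
print(clf.predict([[4, 3]]))     # a discrete prediction: spam (1) or not spam (0)
```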

2. Unsupervised Learning.

This type of learning finds patterns in unlabelled data; a common example of this is clustering. The best example of this type of problem is the Cocktail Party Problem:

Imagine that you're sitting in a room which has two microphones, mic A and mic B, placed in two different corners of the room. There's music playing in the room and you are singing along. The audio intensity of your voice will be higher than the intensity of the music in mic A because you're closer to it; similarly, the intensity of the music will be higher than that of your voice in mic B. So when this audio input is fed to the machine learning algorithm, it identifies that there are two different audio sources in the room by finding patterns in the audio data, and then processes it in such a way that it separates the two, so you can hear each of them on its own, i.e. it outputs two audio files: a music file and a singing file.
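Separating mixed signals like this is classically done with Independent Component Analysis (ICA). Below is a rough sketch (my own illustration; two synthetic waveforms stand in for the music and the singing, and scikit-learn's FastICA is just one possible tool) of how the two mic recordings could be unmixed:

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two synthetic "sources" standing in for the music and the singer's voice
t = np.linspace(0, 8, 2000)
music = np.sin(2 * t)                  # source 1: a smooth tone
voice = np.sign(np.sin(3 * t))         # source 2: a square-ish wave

# Each microphone records a different mixture of the two sources
S = np.c_[music, voice]
A = np.array([[0.5, 1.0],              # mic A: closer to the singer, so more voice
              [1.0, 0.5]])             # mic B: closer to the speakers, so more music
X = S @ A.T                            # the two recorded (mixed) signals

# ICA recovers the two independent sources from the mixtures (up to order and scale)
ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(X)
print(recovered.shape)                 # (2000, 2): one column per separated signal
```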