Wednesday 3 December 2014

Coming To An End

    As classes have finally come to an end, it feels appropriate to reflect on my CSC165 experience. Coming into the course I didn't have a clue what to expect. It was like no other course I'd ever taken before, probably because it was so theoretical. Being successful really required a lot of thinking and a deep understanding of the concepts; it wasn't so much memorizing equations and "plugging and chugging", as the phrase goes. It broadened my view of computer science, exposing us to concepts most of us had never seen before. I've already enrolled in CSC240 for next semester, as the whole field of theoretical computer science really intrigues me. Although it's still early, I'm planning on pursuing a Computer Science Specialist focused on theory of computation. Overall the course was a pleasure to take and I would definitely recommend it to others. 

Mathematical Induction

   With the introduction of simple mathematical induction comes an interesting view, as opposed to the one taught in MAT137. We were presented with the idea of expressing induction using boolean values: if P(0) is true, and whenever P(n) is true then P(n + 1) is also true, then induction dictates that P(n) is true for every natural number n. Otherwise false values could be hiding among the results. We also saw that the starting point doesn't have to be 0; the base case can be chosen somewhere later. The way we were taught mathematical induction in MAT137 was demonstrated really well by another student, Christian, in his slog entry. He took a more traditional mathematical approach: verifying the base case n = 1, assuming the claim for n, and showing that the claim for n + 1 holds, for various functions. Overall I can see the practicality of mathematical induction for proofs and am interested in the other induction methods.
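   Roughly, the principle of simple induction can be written out in the notation from class like this (my own summary, so take it as a sketch):

        [P(0) ∧ (∀ n ∈ ℕ, P(n) ⇒ P(n + 1))] ⇒ ∀ n ∈ ℕ, P(n)

   Verify the base case, show that each case implies the next, and the claim follows for every natural number.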

The Halting Function

    With the introduction of the halting function comes an interesting concept. The way I see it, we define a function by describing its two possible outputs, and then try to write a body that actually produces them; in essence, it's the definition of those outputs that makes such a function impossible to implement. Furthermore, when we use reduction to prove other functions impossible to implement, the halting function's definition simply allows us to make the second function meet its description, as long as we write a piece of code that either halts or doesn't halt depending on the case. As another student, Ji, stated on his blog, even with Turing-complete programming languages some programs are impossible to write. I recall reading about how, in practice, some situations resemble the halting function in the sense that what they're trying to solve is not computable. So the program is designed to optimize: it finds a solution, then searches for a better one, running over and over so that in essence it keeps improving on the current solution. If a solution to the halting function were created, I'm not so sure of its practicality, save for those cases which model the halting problem. 
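    To make the contradiction concrete for myself, here is a rough Python sketch of the standard argument. The names halt and naughty are my own placeholders, and halt is only assumed to exist for the sake of argument:

        def naughty(f):
            # Suppose halt(f, i) returned True exactly when f(i) halts.
            # Then we could write a function that does the opposite of
            # whatever halt predicts about running f on itself.
            if halt(f, f):
                while True:   # loop forever when halt says f(f) halts
                    pass
            else:
                return 42     # halt right away when halt says f(f) loops

        # Does naughty(naughty) halt? If halt(naughty, naughty) is True,
        # then naughty(naughty) loops forever; if it is False, then
        # naughty(naughty) halts. Either way halt gave the wrong answer,
        # so no implementation of halt can exist.
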
    I'm interested in further examples of these theoretical computer science cases and am looking forward to learning more about them. In regards to the halting problem, I don't see a solution arising unless a program can determine whether or not it will run forever, which would require quite a lot of intuition on the program's part. We've probably not gotten to that point in machine learning and artificial intelligence, since we still haven't solved this problem. On that note, with figures such as Elon Musk and Stephen Hawking warning us about advances in machine learning and artificial intelligence, I'd probably lean towards not getting to that point.

Monday 17 November 2014

Big Omega

    With the introduction of big omega we developed a number of related proofs. In comparison to big o proofs, our big omega proofs seem to deal more heavily with bounding one function in terms of another, as opposed to concrete expressions of some nth degree; or at least that's what differentiates our big omega examples from the big o ones. In fact I'm curious to see how we'll develop the big theta idea, which essentially combines big o and big omega into one.
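    For my own reference, here are the three definitions side by side, roughly in the notation from lecture (my own summary):

        f ∈ O(g):  ∃ c ∈ ℝ⁺, ∃ B ∈ ℕ, ∀ n ∈ ℕ, n ≥ B ⇒ f(n) ≤ c·g(n)
        f ∈ Ω(g):  ∃ c ∈ ℝ⁺, ∃ B ∈ ℕ, ∀ n ∈ ℕ, n ≥ B ⇒ f(n) ≥ c·g(n)
        f ∈ Θ(g):  f ∈ O(g) ∧ f ∈ Ω(g)

    So I'd guess big theta will just amount to proving both bounds at once.
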
    As I started working on my third assignment in CSC108, I went through the programming with the mindset of minimizing runtime. I didn't necessarily go through a whole proof analyzing the total number of steps, yet I did try to reduce everything to the lowest big o it could be while still performing the given task. This really did help with runtime efficiency, which was a problem in CSC108 assignment two.
    Another point I'd like to bring up is the runtime analysis of various sorting algorithms. I'm curious whether the runtime varies in certain situations based on which algorithm is used. I'd assume it does: if certain algorithms are better suited to different situations, then the runtime should vary depending on the situation at hand. All in all, I certainly appreciate the runtime analysis in this course and see its usefulness in computer science. 

Monday 10 November 2014

Sorting Algorithms

    As we have been learning about algorithms, in particular the big o definition and runtime, I find that talking about various sorting and searching methods is pertinent to this course. This week in CSC108 we've just begun discussing them: binary search, bubble sort, insertion sort and selection sort. After hearing about the concept behind binary search, it got me thinking whether it would be possible to implement an algorithm that divides a list repeatedly in a similar way to further speed up runtime. I don't think it's possible with what we've been taught in CSC108 up until this point. Regarding the three sorting algorithms I don't have much to say, as each one is better suited to a different circumstance. In regards to further big o analysis, the proof that a function is not in O(n^2) came rather naturally. I'm quite surprised at how fast university has gone by already and am shocked that in about three weeks we'll be done with classes.
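    As a refresher for myself, here is roughly what binary search looks like in Python. This is my own sketch, not the CSC108 version:

        def binary_search(lst, value):
            # lst must already be sorted. Each step halves the range that
            # could contain value, so the number of comparisons grows
            # roughly like log2(len(lst)) instead of len(lst).
            low, high = 0, len(lst) - 1
            while low <= high:
                mid = (low + high) // 2
                if lst[mid] == value:
                    return mid
                elif lst[mid] < value:
                    low = mid + 1
                else:
                    high = mid - 1
            return -1  # value is not in lst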

Monday 3 November 2014

The Big O - Insertion Sort Worst Case Scenario

    To begin, this is my interpretation of how the Big O proof works; since our focus this week was on the Big O, my slog will try to better understand the concept. What we were looking at was the worst case of using insertion sort on an array, and the bounds that the worst-case number of iterations would fall between. As the proof goes, we would have some value B such that n, the length of our array A, is greater than or equal to B. This would then imply that W(n) is less than or equal to some constant c times n squared for the upper bound of the range, and greater than or equal to some other constant times n squared for the lower bound. From there, based on our code, we overestimated the upper bound and underestimated the lower bound to be sure that the worst case would fall within these two values. It's taken some thinking to get used to the idea but I've come to understand it.
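    To convince myself the n squared bound is reasonable, I put together a small sketch that counts comparisons when insertion sort runs on a reverse-sorted list, which is the worst case. The count comes out to n(n - 1)/2, which does sit between a lower and an upper constant multiple of n squared once n is large enough. This is just my own experiment, not the code from lecture:

        def insertion_sort_count(A):
            # Sorts A in place and returns the number of comparisons made.
            count = 0
            for i in range(1, len(A)):
                j = i
                while j > 0:
                    count += 1                     # one comparison
                    if A[j - 1] > A[j]:
                        A[j - 1], A[j] = A[j], A[j - 1]
                        j -= 1
                    else:
                        break
            return count

        # A reverse-sorted list forces the inner loop to run all the way
        # down every time: insertion_sort_count(list(range(10, 0, -1)))
        # returns 45, which is 10 * 9 / 2.
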
     I'm quite curious as to how the average case and best case scenarios would work now. I've really been enjoying this class and have been thinking more and more about continuing further in this sub-field of computer science. The whole algorithmic analysis concept really intrigues me and I must say that this is probably my favorite course that I'm currently taking.

Sunday 26 October 2014

Penny Piles

    As the problem goes, we essentially have to work out which distributions of pennies we can reach given the two rules. In other words, we can only transfer pennies from one drawer to the other if that drawer holds an even number of pennies, and we move exactly half of them. My plan at first was to try to find some way to prove that not all the numbers below our threshold were obtainable. In addition, we only have to handle half of the values below the threshold, because proving one half proves the other: for instance, if you can get 63 in one drawer, you know you have 1 in the other, and so on. From here I denoted the number of pennies in the left drawer as x and the number in the right drawer as y, with x + y = 64. Then I essentially created a factor tree.
    Since we must follow the two rules given to us, halving is only ever applied to an even number of pennies, so the result of the division is always an integer; otherwise the whole problem would break, since we'd be able to take half of an odd number of pennies and move them to the other side. After this point I began thinking about how we could prove that we can obtain any combination. My thought process is similar to proofs we did in class, in terms of introducing some new variable as an instance of the x + y = 64 relationship; since the division here always yields an integer, this seems sound. However, the problem I'm having is proving that this applies to every value below the threshold. Aside from showing that we can obtain every combination by actually working them all out, I haven't found a way to prove this. The route I'm thinking of taking is to first show that we can obtain all the prime numbers below the threshold and from there get every other result.
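    While I work out the proof, here is a quick brute-force sketch I wrote (my own experiment, not part of any assignment) that explores every state reachable from 64 pennies in the left drawer and nothing in the right. It reports that every count from 0 to 64 shows up in one drawer or the other, which at least tells me what I should be trying to prove:

        def reachable_counts(total=64):
            # States are (left, right) pairs. A move takes half the pennies
            # out of a drawer holding an even number and puts them in the
            # other drawer.
            seen = set()
            frontier = [(total, 0)]
            while frontier:
                state = frontier.pop()
                if state in seen:
                    continue
                seen.add(state)
                left, right = state
                if left % 2 == 0:
                    frontier.append((left // 2, right + left // 2))
                if right % 2 == 0:
                    frontier.append((left + right // 2, right // 2))
            # Collect every count that appears in either drawer.
            return sorted({count for state in seen for count in state})

        # reachable_counts() comes back as [0, 1, 2, ..., 64].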
    

Sunday 12 October 2014

The Infinite Prime Number Proof


         As the title states, this post is primarily concerned with Euclid's 2500-year-old proof, which we began in class. Proving the statement required a proof by contradiction: we began by assuming there is some natural number n such that every prime number is less than or equal to n, so the set P of prime numbers is finite. We then set m equal to the product of all the elements of P and considered m + 1, which may or may not be a new prime number. My thought process from here was that if m + 1 is a new prime, this side of the proof is resolved. On the other hand, if it's not a new prime, then we have to show that a new prime number still arises. Now, any integer can be rewritten as a product of prime numbers. If we take the number m + 1, it cannot be divided evenly by any element of P, since dividing by any of them leaves a remainder of 1. Therefore, in order to write m + 1 as a product of a prime number and some other number, that prime must be a new prime not listed in P. To conclude, in both cases we arrive at a new prime outside the set P, and so there must be infinitely many prime numbers.
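         A small worked example helped me see the two cases. If P = {2, 3, 5}, then m = 30 and m + 1 = 31, which is itself a new prime. If P = {2, 3, 5, 7, 11, 13}, then m = 30030 and m + 1 = 30031 = 59 × 509, which is not prime, but its prime factors 59 and 509 are still outside of P. Either way a prime missing from P turns up.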
         Before I end this post I'd like to share a quick thought on our recent midterm. I thought it was very straightforward and was pleasantly surprised by its difficulty; for a 60-minute test I believe it was very fair. To conclude, I'm really liking this course and aim to take more courses like it in the future.

Saturday 4 October 2014

Introducing The Proofs

    Like most if not all applications of logic and reasoning, we've come to the point where we need the formality of proofs. In contrast with my calculus course, the proofs in this course tend to be relaxed and nonchalant in terms of structure, which comes down to the target audience: in calculus the assumption is that we have to explain the "why" to a beginner, while in this course the proofs are aimed at someone with experience. From my perspective, it seems as though proofs are going to become central to this course. As for transitivity and the manipulation rules, they seem no different from those found in ordinary algebra, coupled with some truth tables. 
    Thinking back to the problem involving folding a paper in half over and over again, writing a formal proof has been really thought-provoking. Following a rough guideline, what we're given can be denoted as P(x), and the outcome our plan leads to can be denoted as Q(x). To get from P(x) to Q(x) we must determine the algorithm that sits in between. More will come on this as I work out the details; the idea is there, as seen in my previous slog entry, and the problem is just representing it as some sort of expression. With the results being up or down creases, I'm thinking there may be a way to represent them in a truth table styled manner. This wouldn't give me the next sequential terms, but rather the rules from which new terms arise. Anyway, more on this as I make progress.
    One last thought: I believe that we've been sufficiently prepared for the exam coming up and find that the tutorials have been very beneficial. 

Friday 26 September 2014

Folds and Creases

     When such a simple-looking problem, relating the direction of the creases in a paper folded over and over again in the same direction, came up in class, I was not expecting it to be as thought-provoking as it turned out to be. At first I thought there would be some simple algorithm to make sense of the pattern of up and down creases, and I began by trying to find some sort of pattern, as most would. Essentially it all looked like randomness until I thought through the whole process: with each fold a new crease is introduced into every section, and whether it faces up or down depends on the adjacent creases.
     I found that I could denote various groups of creases by a number. In addition, I noticed that each group essentially pushed out from the very middle crease, which pointed down, so new groups would appear on the left and right of the middle term, shifting the older terms outward correspondingly. From here I determined that there was a correlation between the new groupings and the older ones: the newer ones were based on a combination of the older groupings.
Looking at these groupings, the two whole sequences on either side were just mirror opposites of each other, with the middle "down" being the point where the mirror would sit. To further the correlation, when determining new groupings, a similarly-sided older grouping needed no special treatment, while an opposite-sided grouping needed, in essence, to be reflected in a mirror before it could be incorporated. So to produce, for example, the 5th row's new grouping on the left side, you'd take the first grouping to the left of the middle and divide it into two. Then you'd take the second grouping to the left, and at the same time take the second grouping to the right but reflect it to get its mirror image. By combining the second left grouping with the mirror image of the second right grouping and inserting the result into the middle of the first grouping to the left, you produce the new term in the next sequence. The same rules apply to producing the next term on the right side. The whole problem is dependent on every other section, which in turn results in such properties.
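     Out of curiosity I also tried writing a rule down in code. A simpler formulation seems to reproduce the same sequences: each new sequence is the previous one, then a single down crease, then the previous sequence reversed with every crease flipped. This is just my own sketch, not the grouping method described above:

        def creases(folds):
            # 'D' is a crease pointing down, 'U' one pointing up.
            seq = []
            for _ in range(folds):
                flipped = ['D' if c == 'U' else 'U' for c in reversed(seq)]
                seq = seq + ['D'] + flipped
            return seq

        # creases(1) -> ['D']
        # creases(2) -> ['D', 'D', 'U']
        # creases(3) -> ['D', 'D', 'U', 'D', 'D', 'U', 'U']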

Friday 19 September 2014

The Beginning to my Mathematical Expression

     Quite frankly, I wasn't sure what to expect from this class. When I enrolled in courses like calculus and biology, it's safe to assume I had a firm idea of what they would involve. On the contrary, I had never taken a course that placed such an emphasis on logical reasoning and expression, so I was optimistically skeptical going into the first class. In that first class Professor Heap described proofs as being works of literature, and this statement I believe to be fundamental. The poetic remark was thought-provoking in the sense that it gave me new insight into mathematical logic. All good works of literature have some key elements that define their success. Having little prior experience with the subject myself, I started applying this idea not only to this class but to calculus as well, with a rather successful outcome. Giving structure to a proof lends a sense of anticipation to its mysterious nature; in other words, the structure is a sort of map. If we anticipate that there will be some sort of rising action and we correlate it to some part of the proof, we know we're close to a eureka moment, and that eureka moment is what leads to the falling action and the resolution. In practice, this especially helped me in moments where I'd be stuck: taking a step back, I'd think to myself, "what's next?" Going through this logical schematic, so far I've been able to work the problems out consistently. In essence, that remark's implication is itself a puzzle which has to be deciphered using logic and reasoning.
     Furthermore, what I found really interesting was the idea that an implication whose "if" statement may not be factually correct, but whose "then" statement is correct, still evaluates to true as a whole. For instance: if pigs can fly, then the sky is blue. The reasoning, to my understanding, is that the sky is in fact blue to the human eye, and this would remain correct whether or not pigs could fly; or rather, we have no basis to claim that the sky wouldn't be blue if pigs could in fact fly. Progressing forward, when we reached Venn diagrams and began to express what the diagrams represented in notation, an interesting situation came about: by not marking that a section was completely empty, the possibility arose that an empty set was a subset of another empty set. This case was then used to prove that that particular Venn diagram was drawn incorrectly.
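     Writing out the truth table made this click for me; the only case where an implication comes out false is when the "if" part is true and the "then" part is false:

        P      Q      P ⇒ Q
        T      T        T
        T      F        F
        F      T        T
        F      F        T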