Monday 17 November 2014

Big Omega

    With the introduction of Big Omega we developed a number of related proofs. Compared to Big O proofs, the Big Omega proofs seem to deal more heavily with variables expressed in terms of other variables rather than with concrete expressions of some nth degree, or at least that's what differentiates our Big Omega examples from the Big O ones. I'm curious to see how we'll develop the Big Theta idea, which essentially combines the Big O and Big Omega ideas into one: a function is in Big Theta of g exactly when it is in both Big O of g and Big Omega of g.
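To make the combined idea concrete, here is a small numeric sanity check. The function f and the constants are my own example, not one from lecture: f(n) = 3n^2 + 2n should be in Big Theta of n^2, witnessed by the lower constant c1 = 3, the upper constant c2 = 4, and breakpoint B = 2.

```python
# Numeric sanity check of the Big Omega / Big O squeeze that makes Big Theta.
# f, c1, c2, and B are my own illustrative choices, not course material.

def f(n):
    return 3 * n**2 + 2 * n

c1, c2, B = 3, 4, 2

for n in range(B, 1000):
    # Big Omega part: f(n) >= c1 * n^2 for all n >= B
    assert c1 * n**2 <= f(n)
    # Big O part: f(n) <= c2 * n^2 for all n >= B
    assert f(n) <= c2 * n**2

print("3n^2 + 2n sits between 3n^2 and 4n^2 for every n >= 2")
```

Of course a finite loop isn't a proof, but checking a witness pair like this is a quick way to catch a bad choice of constants before writing the proof out.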
    As I started working on my third assignment in CSC108, I went through the programming with the mindset of minimizing running time. I didn't go through a whole proof analyzing the total number of steps, but I did try to reduce everything to the lowest Big O class it could be in while still performing the given task. This really did help with runtime efficiency, which was a problem in CSC108's second assignment.
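As a hypothetical example of the kind of rewrite I mean (the functions below are my own sketch, not assignment code): checking membership in a Python list costs O(n) per lookup, while a set lookup is expected O(1), so building a set once can turn an O(n^2) loop into roughly O(n).

```python
# Two versions of the same filter; only the lookup structure changes.

def keep_words_slow(words, stopwords):
    """Return words not in stopwords (list lookup: O(len(stopwords)) each)."""
    return [w for w in words if w not in stopwords]

def keep_words_fast(words, stopwords):
    """Same result, but the set is built once for O(1) average lookups."""
    stop = set(stopwords)
    return [w for w in words if w not in stop]

sample = ["the", "proof", "the", "idea"]
print(keep_words_slow(sample, ["the", "a", "an"]))  # → ['proof', 'idea']
print(keep_words_fast(sample, ["the", "a", "an"]))  # → ['proof', 'idea']
```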
    Another point I'd like to bring up is the runtime analysis of various sorting algorithms. I'm curious whether the running time varies in certain situations depending on which algorithm is used. I'd assume it does: if certain algorithms are better suited to different inputs, then the running time should depend on the situation at hand. All in all, I certainly appreciate the runtime analysis in this course and see its usefulness in computer science.
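One way to test that hunch is to instrument an algorithm and feed it different inputs. The counter-instrumented insertion sort below is my own sketch, not code from the course; it shows the same algorithm doing n - 1 comparisons on an already-sorted list but n(n-1)/2 on a reversed one.

```python
# Does running time depend on the input, not just the algorithm?
# Count the comparisons insertion sort makes on two extreme inputs.

def insertion_sort_comparisons(lst):
    """Sort a copy of lst with insertion sort; return the comparison count."""
    a = list(lst)
    count = 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            count += 1                      # one comparison of a[j-1] vs a[j]
            if a[j - 1] > a[j]:
                a[j - 1], a[j] = a[j], a[j - 1]
                j -= 1
            else:
                break                       # item already in place
    return count

print(insertion_sort_comparisons(range(100)))         # sorted: 99
print(insertion_sort_comparisons(range(100, 0, -1)))  # reversed: 4950
```

So the answer to my own question seems to be yes: the same algorithm's cost can swing between linear and quadratic depending on the input it is given.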

Monday 10 November 2014

Sorting Algorithms

    As we have been learning about algorithms, in particular the Big O definition and running time, I find that talking about various sorting methods is pertinent to this course. This week in CSC108 we've just begun discussing sorting and searching techniques, including binary search, bubble sort, insertion sort and selection sort. Hearing about the concept behind binary search got me thinking about whether it would be possible to implement an algorithm that could repeatedly divide a list and do a similar analysis to further speed up running time. I don't think it's possible with what we've been taught in CSC108 up to this point. Regarding the remaining three algorithms I don't have much to say, as each one is better suited to a different circumstance. As for further Big O analysis, proving that a function was not in O(n^2) came rather naturally. I'm quite surprised at how fast university has gone by already and am shocked that in about three weeks we'll be done with classes.
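The dividing idea is easiest to see in code. This binary search is sketched from memory rather than copied from the CSC108 material: each step halves the portion of the sorted list that could still contain the value, so the running time is O(log n) instead of the O(n) of a straight scan.

```python
# Binary search on a sorted list: halve the candidate range each step.

def binary_search(sorted_list, value):
    """Return an index of value in sorted_list, or -1 if it is absent."""
    low, high = 0, len(sorted_list) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_list[mid] == value:
            return mid
        elif sorted_list[mid] < value:
            low = mid + 1      # value can only be in the right half
        else:
            high = mid - 1     # value can only be in the left half
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # → 3
print(binary_search([1, 3, 5, 7, 9, 11], 4))  # → -1
```

Note that the halving only works because the list is already sorted; on an unsorted list the "which half?" question has no answer, which is one reason sorting itself matters so much.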

Monday 3 November 2014

The Big O - Insertion Sort Worst Case Scenario

    To begin, this is my interpretation of how the Big O proof works. Since our focus this week was on Big O, this slog entry will be my attempt to better understand the concept. What we were looking at was the worst case of using insertion sort on an array, and the two quantities the worst-case number of iterations would fall in between. As the proof goes, we would have some breakpoint B such that n, the length of our array A, is greater than or equal to B. For every such n, W(n), the worst-case step count, is less than or equal to some constant c2 times n squared for the upper bound of the range, and greater than or equal to some constant c1 times n squared for the lower bound. From here, based on our code, we overestimated the upper boundary and underestimated the lower boundary to be sure that the worst case would fall within these two values. It's taken some thinking to get used to the idea, but I've come to understand it.
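Here is a numeric check of that squeeze, under my own assumption that the worst case for insertion sort on a list of length n is the reversed list, which costs W(n) = n(n-1)/2 key comparisons. With breakpoint B = 2, the constants c1 = 1/4 (the underestimate) and c2 = 1 (the overestimate) witness c1 * n^2 <= W(n) <= c2 * n^2 for all n >= B.

```python
# Check the lower and upper n^2 bounds on insertion sort's worst case.
# W(n) = n(n-1)/2 is the reversed-list comparison count; B, c1, c2 are
# my own choice of witnesses, not the ones used in lecture.

def worst_case_steps(n):
    """Comparisons insertion sort makes on a reversed list of length n."""
    return n * (n - 1) // 2

B, c1, c2 = 2, 0.25, 1

for n in range(B, 10000):
    W = worst_case_steps(n)
    assert c1 * n**2 <= W <= c2 * n**2

print("n(n-1)/2 sits between n^2/4 and n^2 for every n >= 2")
```

The underestimate needs the breakpoint: n(n-1)/2 >= n^2/4 only holds once n >= 2, which is exactly the kind of thing B is there to absorb.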
    I'm quite curious as to how the average-case and best-case scenarios would work now. I've really been enjoying this class and have been thinking more and more about continuing further in this sub-field of computer science. The whole concept of algorithmic analysis really intrigues me, and I must say that this is probably my favorite course that I'm currently taking.