Last week, we learned more about trees. After covering the basics, such as what exactly trees are and the various definitions that come with the data type, we needed to learn how to implement them, although we specifically learned how to implement binary trees only. These implementations can include things such as inserting roots or children, obtaining the root value, etc. We learned that the two most common methods of implementing trees are a list of lists or nodes and references. The list-of-lists approach represents each binary tree as a list of three elements, where the first element is the root, the second is the left subtree, and the third is the right subtree. The left or right subtree can be a binary tree on its own if it has children, hence the list-of-lists representation. The nodes-and-references method is quite similar to the list-of-lists approach, only there is no list being used. Instead, objects of the BinaryTree class are used to represent the individual subtrees inside a tree.
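A minimal sketch of the two representations described above (the helper names and the exact BinaryTree attributes here are my own choices, not necessarily the course's):

```python
# List-of-lists: [root_value, left_subtree, right_subtree],
# where an empty list [] stands in for a missing child.
tree = ['a', ['b', [], []], ['c', [], []]]

def root_value(t):
    return t[0]

def left_subtree(t):
    return t[1]

def right_subtree(t):
    return t[2]

# Nodes-and-references: each subtree is a BinaryTree object,
# with None standing in for a missing child.
class BinaryTree:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

t = BinaryTree('a', BinaryTree('b'), BinaryTree('c'))
```

Both versions encode the same tree: `left_subtree(tree)` gives the list `['b', [], []]`, while `t.left` gives the BinaryTree whose value is `'b'`.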
Learning about the implementation of trees perhaps isn't the most fun thing ever, but it is interesting to notice that trees are recursive by nature. So naturally we can take the skills we learned while studying recursion and apply them here!
Assignment one was definitely the most challenging thing this week. It did not include anything about trees, only stacks and recursion. This assignment is perhaps the most challenging we've done to date (including the assignments we did in CSC108). The first part was pretty straightforward, but understanding how to implement (EFFECTIVELY!) a solution with not 3 stools but 4 proved to be quite a challenge. My solution was efficient up to 6 cheeses, after which it drastically became less efficient. I tried several strategies to figure out how to make it simpler, such as using the GUIController to manually work out my solutions and find a pattern, but unfortunately, that didn't help much. What I did notice was that if n - i happened to be more than 3, then you would require the help of the starting stool to move all the n - i cheeses to an intermediate stool. This made a lot of sense, and I also proposed having two cases specifically for this. But the case where n - i cheeses happened to be greater than 3 was, again, harder to figure out. Hopefully, I can ensure my success on the next assignment by starting earlier and perhaps thinking a lot harder.
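The pattern I was circling here, splitting the n cheeses into a group of i and a group of n - i, turns out to be what's known as the Frame–Stewart strategy for four pegs. A sketch that only counts moves (not the assignment's intended solution, and it doesn't produce the actual move sequence):

```python
from functools import lru_cache

def moves_3(n):
    # Classic 3-stool Tower of Hanoi: the minimum is 2**n - 1 moves.
    return 2 ** n - 1

@lru_cache(maxsize=None)
def moves_4(n):
    # Frame-Stewart idea for 4 stools: move i cheeses to an
    # intermediate stool using all 4 stools, move the remaining
    # n - i cheeses using only 3 stools (one stool is occupied),
    # then move the i cheeses on top. Try every split and keep
    # the cheapest.
    if n <= 1:
        return n
    return min(2 * moves_4(i) + moves_3(n - i) for i in range(1, n))
```

For example, `moves_4(3)` is 5, beating the 3-stool minimum of 7, and the gap grows quickly as n increases, which is why a naive 3-stool approach falls apart past a few cheeses.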
Sunday, February 23, 2014
Sunday, February 16, 2014
These Are Not Your Regular Trees!
This week we learned about trees. Not your regular trees, but ones we use to organize data. They are perhaps called trees because they, too, branch out.
What I particularly hate about trees is all the terminology we had to learn. Some of it is straightforward, such as parent and root, and perhaps siblings as well. Other terms, such as branching factor and internal node, not so much. But after revising the terminology it does start to make sense. Perhaps the biggest challenge this week was understanding the ordering types, since trees can be represented as lists: there's preorder, inorder, and postorder traversal. All relative to the root of the tree, of course.
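The three traversal orders can be sketched in a few lines each (using a hypothetical minimal BinaryTree class, not necessarily the course's exact one):

```python
class BinaryTree:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def preorder(t):
    # root first, then the left subtree, then the right subtree
    return [] if t is None else [t.value] + preorder(t.left) + preorder(t.right)

def inorder(t):
    # left subtree, then root, then right subtree
    return [] if t is None else inorder(t.left) + [t.value] + inorder(t.right)

def postorder(t):
    # left subtree, then right subtree, root last
    return [] if t is None else postorder(t.left) + postorder(t.right) + [t.value]
```

On the tree with root 1 and children 2 and 3, preorder gives `[1, 2, 3]`, inorder gives `[2, 1, 3]`, and postorder gives `[2, 3, 1]`; the only thing that changes is where the root's value lands relative to its subtrees.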
This week our lab was based on Python idioms, something we did a while back. I hadn't revised them at all prior to the lab, but despite that I did not find the lab very difficult. I was able to use the Python shell to my advantage and figure out what the idiom statements did.
How are trees different from the other data structures we've learned? Unlike everything we've seen before, namely stacks and queues, trees are not linear structures of data; they are hierarchical. We can still represent trees linearly in lists, hence the different traversal types.
Friday, February 14, 2014
More Recursion (Again, again and again)
This week we did more recursion (YAY!), specifically ways to trace recursive functions (easily) and more examples of recursion. One example I found quite interesting was binary search. The idea of binary search is to divide a sorted list in half: if the number you are looking for is greater than the middle number, then you look for it in the second half. It makes a lot of sense, which is nice to know when some things are quite confusing, especially in comp sci! So how would this work recursively? Doesn't seem too hard; in pseudocode:
- Compare the middle number to the number we are searching for
- if the number we are looking for is the middle number, then we've got it!
- if it is greater, recursively call the function on the second half, from just after the middle number (since we know the middle number is not the one we're searching for, there's no need to include it) to the end of the list
- if it is less, recursively call the function on the first half, from the beginning up to but not including the middle number.
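The pseudocode above translates almost line for line into Python. One sketch (passing index bounds rather than slicing, so no copies of the list are made; the function name and signature are my own):

```python
def binary_search(lst, target, low=0, high=None):
    """Return an index of target in sorted lst, or -1 if absent."""
    if high is None:
        high = len(lst) - 1
    if low > high:
        # empty range: the target isn't here
        return -1
    mid = (low + high) // 2
    if lst[mid] == target:
        # the middle number is the one we're looking for
        return mid
    elif target > lst[mid]:
        # search the second half, excluding the middle number
        return binary_search(lst, target, mid + 1, high)
    else:
        # search the first half, excluding the middle number
        return binary_search(lst, target, low, mid - 1)
```

Each call halves the range, so a list of a million items needs only about twenty recursive calls.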
Sunday, February 9, 2014
Recursion Again and Again and Again!
This week our main focus has been recursion. I've often asked myself about the relevance of mathematical proofs, and here it is in the form of recursion. Recursion has its basis in a mathematical idea called induction. I'm almost 100% sure that if I hadn't learned about induction, the concept of recursion would've been much harder to grasp. The two go hand in hand in my understanding of each concept: recursion helps me understand induction more, and induction helps me understand recursion more.
Recursion isn't something that is easily grasped, and like any other concept, it needs to be practiced. Luckily, that's what labs are for! I found this week's lab to be perhaps the easiest we've done so far, which seems odd because recursion seems like a difficult topic, and I wasn't sure I understood it exactly. I expected the lab to be challenging, but in fact it was very straightforward.
Recursion is a useful idea but is it possible that it might not be as useful as we'd like it to be?
One of my concerns regarding recursion is its performance. Do recursive functions perform better than the same functions written without recursion, or does it depend on the function itself, i.e., do some functions simply run faster recursively? Greater research (me googling) into the subject reveals that it depends on the language used. It turns out that languages like Java and C have quite a problem with recursive programs, in the sense that they do require a lot of time. Although we aren't that concerned with those languages, it seems interesting to know. It turns out that recursion is quite costly for Python as well. This makes sense, because you aren't calling a recursive function just once; you're calling it as many times as needed, which can be quite a lot! The actual reason it is so costly is that each call requires "the allocation of a new stack frame," as mentioned here. Which raises a few questions of my own... What is a stack frame? Why does it need to be "allocated"? A stack frame is a section of memory used while a function call is executing. This reminds me of the stack data type we learned in class... Are they in any way related? They are: a frame is pushed onto the call stack when a function is called, saving its local state in memory, and popped off when the function returns, restoring us to where we left off, much like pushing and popping items on the stack data type.
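The stack-frame cost is easy to see in Python: since every recursive call allocates a new frame, the interpreter caps the depth of the call stack (commonly 1000 by default) and raises a RecursionError past it. A small experiment:

```python
import sys

def count_down(n):
    # One new stack frame is allocated per call, so a depth of n
    # means n frames live on the call stack at once.
    if n == 0:
        return 0
    return count_down(n - 1)

limit = sys.getrecursionlimit()
try:
    count_down(limit + 100)  # deeper than the frame limit allows
    hit_limit = False
except RecursionError:
    hit_limit = True
```

A shallow call like `count_down(10)` works fine; only the deep one blows past the limit, which is exactly the "allocation of a new stack frame" cost showing up in practice.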
So it turns out that recursion can be useful... sometimes.