For my friends in the CIS department who will take the prelims next year: it's not as hard as they want us to believe :-P Be confident and you'll be fine.
To get everything in context, please read the post above.
Well Jeff, now you've touched on one of my favorite topics: Computability. First, to your comment about decidability with respect to Turing machines:
The correct statement is:
"The halting problem is un-decidable with respect to Turing-machines"
You are right. But, since I love playing devil's advocate, let me add a comment here. Assuming that the Church-Turing thesis is true, I think it is reasonable to assume that, for the most part, the statement "the halting problem is undecidable" implicitly takes Turing machines (or any other machine or formalism of equivalent power, such as the Lambda Calculus) as the underlying computational device. Of course, we know that for some simpler machines this problem is decidable, but if such a simpler device is not given as the context, I think most people would assume you are talking about a Turing Machine-like device.
Even if you believe in Hypercomputation, you run into the halting problem (which I find fascinating). I think this is just a reflection of Gödel's Incompleteness Theorem: in our case, as soon as you get to a machine that is powerful enough (say, the Turing Machine), from there on you will always find programs for which certain problems cannot be decided (even if you go beyond the Turing Machine).
In any case, when asked, I will always say:
Termination is undecidable, in general.
I think the "in general" is very important here, and I add it just to emphasize that this is not necessarily true for all programs in all languages.
So, banazir and I have agreed that there exists a machine for which the halting problem is decidable.
Oh, sure there are! Termination is decidable for Finite State Automata, and that is a very useful type of machine. But you don't have to go that simple. The best way to get good insight into this is to look at the Theory of Recursive Functions. There are excellent books in the library on this subject, if you are interested. The key here is to look at the Primitive Recursive Functions. For example, I have defined a programming language, which for now I call Primitive (for Primitive Recursive Language; in fact, I'm still working on it), in which any program you can write is provably terminating. The language has recursion, but only primitive recursion.
The language provides facilities for defining inductively defined data structures. For example, a list:
type list = nil | 'a * list
Then, you can define functions on recursively defined data:
fun rec listFunction : (nil -> 'b, 'a * 'b -> 'b);
The idea is that you provide a function that does something in the base case (empty list) and another for the inductive case. The type of the function is a tuple of two functions. The 'rec' modifier states that this is a recursive function, and every function with this modifier must be of that type (it is actually more general and more complicated, but this gives the idea). Then, when invoked on a list, the corresponding function is applied (depending on the case), and if the list is not empty, the function is called recursively on the tail of the list. You can define this for arbitrary inductively defined types. All programs in this language terminate, and yet you have recursion. This type of recursion is called primitive recursion.
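To make the idea concrete, here is a small sketch in Python (not in Primitive itself; the function names are just illustrative) of primitive recursion over lists: a fold where the only recursive call is on the tail, so termination on finite lists is guaranteed.

```python
# Primitive recursion over lists, sketched in Python: you supply a value for
# the nil case and a step function for the inductive case. Termination is
# guaranteed because the recursive call is made only on the (strictly
# smaller) tail of the list.

def list_rec(base, step, xs):
    """base handles nil; step(head, rec_result) handles the inductive case."""
    if not xs:                      # nil case
        return base
    head, tail = xs[0], xs[1:]
    return step(head, list_rec(base, step, tail))  # recurse only on the tail

# Any function written in this shape provably terminates on finite lists:
length = lambda xs: list_rec(0, lambda h, r: 1 + r, xs)
total  = lambda xs: list_rec(0, lambda h, r: h + r, xs)
```

The same scheme generalizes to any inductively defined type: one clause per constructor, with recursive calls only on the constructor's recursive components.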
Primitive is a very powerful language, but it's not Turing-complete. However, there is a wide range of programs that you can write in it. A big subset of the programs (especially if you focus on non-interactive programs, and I'll say why in a minute) that you can write in other languages can be written in Primitive. However, this very simple (and very common) pattern cannot be expressed in this language:
match s with
  "Option 1" => doOp1()
  "Option 2" => doOp2()
  "Quit" => exit()
The code above is a very simple interface to an interactive program. This pattern is very common for interactive code, that is, code that responds to events. The program above is non-terminating in general, for it depends on a specific external event to terminate. I could just as well turn my computer on, start that program, and leave, and the program would never terminate. To prove that it terminates, you need to prove a liveness property, namely that s will eventually be "Quit". Other than this, most programs that compute mathematical functions are expressible in Primitive (notice the emphasis on most, for this is not true in general; most notably, the Ackermann Function cannot be computed by any Primitive program).
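As a concrete illustration of that last remark, here is the Ackermann Function transcribed naively into Python. It is total (it always terminates), so it is perfectly computable, yet it grows too fast to be captured by primitive recursion alone:

```python
# The Ackermann function: total and computable, but not primitive recursive.
# The recursion descends on the pair (m, n), not on a single argument that
# strictly shrinks, which is exactly what takes it outside Primitive.

def ackermann(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

# Even tiny inputs explode: ackermann(3, 3) == 61, while ackermann(4, 2)
# equals 2**65536 - 3, a number with 19,729 decimal digits.
```

(Don't actually call it with m = 4 and a large n; the naive version will exhaust the stack long before it finishes.)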
So, in summary, every problem that is undecidable with respect to a given computational formalism has decidable subsets. The good thing is that humans seem to be more interested in the computable functions than in the non-computable ones, at least for practical purposes.
Now, to answer your questions:
1. Would a machine-based halting function be of use if it pointed out whether functions in a popular language, say C++, halted, or reported "unknown"?
I take it you mean a procedure for determining whether a program in a popular language terminates (returning "yes", "no", or "can't determine"). Would that be useful? Sure, I think it would find an application, probably in specific domains. For example, imagine a resource-limited machine that runs code provided remotely, that is, it downloads code provided by another entity. Suppose this code was compiled from a general-purpose programming language. Since you have resource constraints, you don't want to run a program that runs forever. You can always impose a time limit. But maybe you don't care about the time so much as whether it terminates at all. Besides, suppose the time limit is a day: you wouldn't want to spend a day running a program that is just trapped in a loop. In this scenario, non-termination is considered an error. If you had some sort of code verifier that could determine in advance whether the program terminates, that would be useful. Now, this is something I just made up, and I can't really think of a general application for such a procedure. But I'm sure it could be applied.
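Just to illustrate the time-limit fallback I mentioned (a sketch in Python, not part of any real system; the function name is mine): since we cannot decide termination in general, we run the untrusted code in a child process and treat exceeding the budget as an error.

```python
# A minimal watchdog: run a snippet in a subprocess and kill it if it
# exceeds a time budget. This is the crude substitute for a termination
# checker: a timeout is treated as (possible) non-termination.
import subprocess
import sys

def run_with_budget(code, seconds):
    """Run a Python snippet in a child process; return 'done' or 'timeout'."""
    try:
        subprocess.run([sys.executable, "-c", code],
                       timeout=seconds, check=True)
        return "done"
    except subprocess.TimeoutExpired:
        return "timeout"

print(run_with_budget("print(2 + 2)", 5))       # a terminating program
print(run_with_budget("while True: pass", 1))   # trapped in a loop
```

Of course, the watchdog can only ever say "it hadn't terminated yet", which is exactly why a verifier that decides termination in advance for a restricted language would be the more elegant solution.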
2. How much, or what of, functionality would be lost in a language where halting was decidable. The language would of course not be able to produce a UTM.
Well, I think I partially answered this above in my description of the Primitive language. You don't necessarily lose a lot of expressive power, but you lose some, of course. It all depends on the problem you want to solve.
3. Is this interesting?
Are you kidding? Of course it is! In fact, a lot of people think so, for this is an active area of research. A year or so ago, Neil Jones gave a talk on this subject (the talk was called "Solving the Halting Problem", if I recall correctly). If you are interested, you should read his work.
4. Do you think that, in light of my arguments, undecidability may cut off potentially useful computer science research?
Well, I guess I answered this in the answer to the previous question. In short: Yes.
Computability is a very interesting topic of research too. But if you ask me, I think you have more natural skills for algorithmics, especially given your strong mathematical background. You should think about it.
Computability is just such a fascinating subject. I love just thinking about it, and I revisit it from time to time. A particularly interesting topic is hypercomputation, over which scottharmon, chriszhong and I have spent countless hours of discussion :-)
Man, do I love Computer Science!
To get everything in context, please read the post above.
OK Jeff, I think you missed part of the point I tried to make, in part because I missed part of the point you tried to make; or rather, I didn't explain it well in my previous reply. First of all, I agree with many of the points you discussed at the beginning, but I disagree with others. The problem is that I focused on the ones I disagree with. To avoid the same situation, I will start by pointing out what it is that I agree with you on.
I agree with you that many CS students don't know the basics, and they should know them. That is, as computer scientists, we need to know the fundamentals, starting of course from computer architecture. In fact, I have felt the same frustration you mentioned, and right now I recall an occasion on which I was discussing some CS topic in front of several CS students (I think it was heap symmetry reductions, if I recall correctly) and someone asked what a 'heap' was. I just couldn't believe my ears, but the fact is that there are several students in this situation, even at the graduate level. So I agree with you on this, but I don't think it is because of a bad curriculum... I think it is just because of sloppy students...
One reason we should know all these details is to get rid of them :-) And here is where we disagree. See, developers take a language like C, learn to love all the little tricks that boost performance, and cherish them as a beloved secret, a symbol of power. They get so deep into this that they don't understand that it is just PLAIN WRONG! Now, as developers, they must learn these things if they want to develop good software. But just as the engineer will always know the fine details much better than the physicist, a developer will know these much better than a computer scientist. Yet it was a computer scientist who designed the language he now masters. It was a computer scientist who worked out the details of how it should be implemented. Do you think all these details were crossing Ritchie's mind when he was designing C? I think not. More than half of the C programmers out there are probably several orders of magnitude better at these tricks than he was.
Now, we don't have to know all of the details either, only the most fundamental ones, and those related to our areas of research. For example, I don't know if you have done real-time systems programming, but if you haven't, you have no idea of all the implementation 'tricks' that boost efficiency that only software engineers working in that area know. Now, if you tell me of a software engineer who doesn't know about the details you were mentioning, that is bad. But that doesn't mean the developer knows more than the scientist. Well, he certainly knows more about his thing, that is, developing software, but the computer scientist is an expert on something else, of course. Take, for example, an AI researcher. He's not going to spend his time figuring out the most efficient way to implement Bayesian Networks; he will probably prefer to spend his time trying to discover more efficient algorithms (complexity-wise), and leave the implementation to the one who knows how to do it better: the software engineer. He will then move on to the next research subject.
Now, there will be a computer scientist out there whose research area is compiler optimization, and for him it is really important to know all of these details, TO GET RID OF THEM. Or someone who is interested in programming languages and needs to know about them to make sure he doesn't make the same mistake in the language he is designing.
Now, of course, nice abstractions usually come at the cost of efficiency. Pointer arithmetic is evil and just plain wrong. From a programming-language point of view, a language that explicitly exposes physical memory addresses is just plain wrong! Until we can get efficient programs out of languages that forbid this feature, it is a necessary evil. And so we still have C/C++, and they are the first choice when performance is an issue (usually time performance). But even in these languages, it should be our job to hunt down these atrocities and exterminate them. For example, you give me the example of matrix multiplication (which is very similar to the old two-dimensional array update problem), and you think it is cool that, because you know the internals, you can get better performance. I tell you: it is cool that you know this and are able to write better code, but it is just horrible that such a situation exists. And here is the line that separates developers from computer scientists: developers will see this as something beautiful in the language; computer scientists will see it as something horrible.
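For the curious, here is the loop-ordering trick in question, sketched in plain Python (the function names are mine). Both orders compute the same product; in a row-major language like C, the ikj order scans B and C sequentially instead of striding down B's columns, which is what cuts the cache misses and page faults:

```python
# Two loop orders for C = A * B over nested lists. The results are
# identical; only the memory access pattern differs. In a row-major layout
# (as in C), the ikj order reads B and writes C row by row, i.e.
# sequentially, instead of striding down B's columns.

def matmul_ijk(A, B):
    n, m, p = len(A), len(B), len(B[0])
    C = [[0] * p for _ in range(n)]
    for i in range(n):
        for j in range(p):
            for k in range(m):
                C[i][j] += A[i][k] * B[k][j]   # strides down B's columns
    return C

def matmul_ikj(A, B):
    n, m, p = len(A), len(B), len(B[0])
    C = [[0] * p for _ in range(n)]
    for i in range(n):
        for k in range(m):
            a = A[i][k]
            for j in range(p):
                C[i][j] += a * B[k][j]         # walks B's rows sequentially
    return C
```

In Python itself the effect is muted (the interpreter overhead dominates), which only strengthens the point: the trick belongs in the compiler, not in the program.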
And that is the lesson that Dijkstra left us: WE CAN DO BETTER THAN THAT. We can do better than having to rearrange indices in nested for loops to minimize the number of page faults: we can create a compiler that detects this and replaces the loop with the most efficient version. That way, the semantics of execution are not exposed in the syntax of the program. The same goes for tail recursion in functional languages. If you are a developer in a functional language and you don't make a conscious effort to write most of your recursive functions so that they are tail recursive, then you are a bad developer. I see this and say: "This is wrong; why don't we have a compiler that will transform a recursive function into a tail-recursive one?" We might not get away with it all the time, but if we can do it most of the time, we are fine. That's why I love efforts like Stackless Python. It's just beautiful. THAT is beauty for me.
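Here is the tail-recursion rewrite I'm talking about, done by hand in Python (which, fittingly, does not eliminate tail calls either; the names are illustrative):

```python
# The transformation, by hand: the plain recursive sum leaves work to do
# after the recursive call returns, so it needs one stack frame per element.
# The tail-recursive version threads an accumulator, so nothing remains to
# do after the call; a compiler with tail-call elimination can turn it into
# the loop shown last. (CPython performs no such elimination, which is
# exactly why the programmer ends up doing this rewrite manually.)

def sum_rec(xs):
    if not xs:
        return 0
    return xs[0] + sum_rec(xs[1:])        # '+' pending after the call: not a tail call

def sum_tail(xs, acc=0):
    if not xs:
        return acc
    return sum_tail(xs[1:], acc + xs[0])  # tail call: the call is the last action

def sum_loop(xs):
    acc = 0                               # what tail-call elimination produces
    for x in xs:
        acc += x
    return acc
```

The point of the rant above is that the step from sum_rec to sum_tail (and then to sum_loop) is mechanical, so the compiler, not the programmer, should be doing it.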
This is also the reason why programmers who are used to such languages (C/C++) are resistant to languages like Java. They want the fine-grained control. They don't want garbage collection; they want to deallocate memory themselves. Yet their programs are full of memory leaks. Give me a moderately complex C/C++ program and I'll give you a memory leak. They don't want array bounds checks, yet their programs segfault in the most unexpected circumstances. And the list goes on and on...
Of course, there is much to do. I remember you mentioned a while ago how funny you thought it was that people were saying Java was getting close to 90% of the performance of C/C++. If you take performance as the only comparison criterion, then you're right, there's nothing to celebrate. But what these people were celebrating was that we have a programming language that is free of most of the atrocities of C/C++, and yet we get comparable performance. In turn, the absence of all the ugly features means fewer bugs, which means that in the end we have better programs, and we only have to resort to C/C++ when performance is so critical that that 10% makes a difference.
Now, we obviously have different points of view on this, and I respect yours, especially because you seem to be a good software developer, and as far as I can tell you are a smart guy.
Now to address some of your comments:
Wouldn't it be a sad day if, when presented with a problem, the "code monkey" produced a better solution? Of course, here, "better" means faster.
First, by "code monkey" I mean the common usage of the jargon: a hacker, a code-munching human, someone whose expertise is just writing tight code. Now, to the statement above: it depends on the problem at hand. If the problem is to design an asymptotically efficient algorithm, then yes, you would expect the computer scientist to outperform the developer. However, the developer, always loyal to the "monkey see, monkey do" rule, should get close, for he should have picked something up by being a software engineer. On the other hand, if, given this algorithm, the task is to implement an efficient version of it, then you would expect the software engineer to come up with the better implementation. But again, you would expect the scientist to get close to the solution provided by the developer. Now, if either of them is way off from the solution provided by the other, in each case, that would be sad.
As I have yet to figure out how to place an object on the stack, which, oh my, is very useful, Java isn’t great
Well, that's the point: we shouldn't have to worry about where to allocate objects; the compiler should be smart enough to figure out where to put them. As of now, the Java compiler doesn't do this, and that is a shame, but there is work on it. Here is a paper I read a while back that is very relevant to this:
The idea is to use escape analysis to determine which objects can be allocated on the stack. There are several publications following this one, and the technique is implemented in high-performance compilers for Java (in particular, in IBM's Java compiler for embedded applications).
I will finish with a few notes. First, I agree with you to some extent, but I think you are totally shifted to one extreme (just as there are people who like the other extreme, that is, forgetting totally about the details and staying in the abstract, which I don't consider a good point of view either). I think we need to be able to see in the big, but also in the small. To quote Donald Knuth, a great computer scientist:
... the psychological profiling [of a programmer] is mostly the ability to shift levels of abstraction, from low level to high level. To see something in the small and to see something in the large.
About Dijkstra: he couldn't be more right when he said what he said about computer science and computers. Computers are just the instrument with which we make science; there is an inter-relationship, but computer science is not just about computers. Heck, he should know. He was one of the greatest computer scientists of all time. And just to give you an idea of how much he cared about efficiency, it was he who came up with the incredibly fast algorithm for finding a shortest path, the so-called "Dijkstra's algorithm" that made him famous. Dijkstra's main argument was: we don't have to sacrifice elegance for efficiency; elegance should be our main design criterion when writing software. And isn't Dijkstra's algorithm one of the most elegant pieces of science in CS? It was he who argued that programmers should think of programs in terms of their weakest precondition (the so-called Dijkstra weakest precondition). It is that elegance that separates us from the monkeys :-)
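And since I brought it up, here is a sketch of Dijkstra's algorithm in Python with a binary heap (my own minimal version, assuming non-negative edge weights and an adjacency-list graph):

```python
# Dijkstra's shortest-path algorithm, the example of elegance cited above.
# graph maps a node to a list of (neighbor, weight) pairs; weights must be
# non-negative. Returns the shortest distance from source to every
# reachable node.
import heapq

def dijkstra(graph, source):
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                        # stale queue entry, skip it
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd                # relax the edge u -> v
                heapq.heappush(heap, (nd, v))
    return dist
```

A dozen lines, provably correct, and asymptotically efficient: that is exactly the "elegance without sacrificing efficiency" he preached.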
Finally, I am a bit of a cowboy myself sometimes. I love Perl, and if there is an ugly language, it is Perl. But I love the power it gives me, the freedom I experience in being able to write programs in a very concise manner, to the point of being almost cryptic, or well structured and totally comprehensible; my choice. I love C for the same reason: the ability to go beyond the evident :-) Feeling like you can almost touch that memory cell, manipulating that structure byte by byte, bit by bit. That's the kind of freedom that promotes ingenuity and originality, something that only the most skilled programmers appreciate. So, from the programmer's point of view, these languages are needed, but not from the scientist's point of view. And to put this eternal conflict in better words, I quote one of my personal idols, Larry Wall:
Many computer scientists have fallen into the trap of trying to define languages like George Orwell's Newspeak, in which it is impossible to think bad thoughts. What they end up doing is killing the creativity of programming.
I totally agree with the quote above. So, we need a variety of programming languages: some with lots of freedom, some without so much of it. Each language has a particular domain that it suits best.
As a Programmer, I love C and Perl; as a Computer Scientist I hate them :-)
You are a Samurai.
You are full of honour and value respect. You are not really the stereotypical hero, but you do fight for good, just in your own way. For you, it is most certainly okay to kill an evil person, if it is for justice and peace. You also don't believe in mourning all the time and think that once you've hit a bad stage in life you just have to get up again. It's pointless to concentrate on emotional pain; better to just get on with everything. You are also a down-to-earth type of person and think before you act. Impulsive people may annoy you.
Main weapon: Sword
Quote: "Always do the right thing. This will gratify some people and astonish the rest" -Mark Twain
Facial expression: Small smile
What Type of Killer Are You? (brought to you by Quizilla)
- What's going on?
- There's something wrong with the Laptop...
- What do you mean?
- Something really wrong!!!
So I came up, and as soon as I looked at the screen I uttered the first words that naturally came to mind:
- Blue Screen of Death (sigh)...
- Don't worry, everything's fine...
- WHAT DO YOU MEAN DON'T WORRY??? IT'S THE SCREEN OF DEATH!!!!!!!! WHAT'S WRONG? IS THE COMPUTER DEAD?????
- (Laugh) No, no... it's normal, it's just a "Windows thing"...
- But it never happened before!
- Well, you've surely seen the blue screen before... remember? Your previous computer with Windows 98? It showed it all the time...
- But it never happened before with this computer and Windows XP (comment: she's had the computer for 3 months)
- Well, with Windows... this is bound to happen, sooner or later... actually rather sooner than later...
- Ah OK... what should I do?
- Just restart the computer...
The amusing thing (at least for me) was how she panicked when I said the words "Blue Screen of Death" :-)... Oh well, I guess she won't panic next time this happens...
When a P man loves an NP woman
With a simple polynomial brain,
I contented myself with P problems,
And always looked at NP with disdain.
Fell in love with a polynomial woman,
But with a non-deterministic wit,
She said she would marry me only,
If I could show her that P=NP.
I rushed to the library and studied,
Asked Garey & Johnson for a hint to the truth,
They said "this is quite a hard question",
But none of them had a hint or a clue.
Went to church and prayed to The Almighty,
"Please, Oh Lord, give me a lead to the truth",
"Don't waste your time son", a voice said laughing,
For I myself on this wasted my youth.
First oracle says you will marry,
Second oracle says you will split,
Time moves, paths branch, results may vary,
Accept the state that will finally fit.
If you finally marry this girl,
And P=NP was the truth,
What a Chaos: Salesmen traveling cheaply!,
And mathematicians with nothing to do!
If you really want to marry this woman,
Then randomness might be the only key,
But please stop praying for an answer,
For I could not decide on this P=NP!
First of all, and I think this might seem relevant to most people with regard to this subject: I am Catholic. I say "this might seem relevant" because religious background has been cited consistently as a source of bias on this subject. And although I can't deny that my religious beliefs might influence my decision one way or another (after all, they form part of my ideology), I don't think religion played a decisive role in my opinion on this matter. I can sense a little grin and a "yeah, right" from most people when they read this claim, but let me make my case. To start with, I am not a conventional Catholic in several respects. For example, I did not grow up in a Catholic family, unlike most Catholics. I think this is an important factor. The way I see it, most members of most religions have "inherited" their religious beliefs from significant family members (mainly parents). These people tend to be the most devoted. However, the way I see it, a Catholic (or practicer of any other religion) "by inheritance" would have practiced any other religion, had he or she been born into a family with a different one. This is not true for all children in a religious family (it also depends on the devotion of the family), but since these individuals are generally intrinsically religious, they cling to this religion with great devotion, because it is the closest source of religion.
I was born into a non-religious family, and to this day I am the only "religious" person in my family. Another thing that separates me from most Catholics is that my decision to become one was a fully conscious decision, the product of exploration. Because of the inheritance phenomenon I just cited, most religious people have little knowledge of other religions (usually they limit themselves to knowing the little things that disqualify those religions, or make them "heretic", in their eyes). For example, I've heard non-Catholics say "Catholics are polytheists" because of the concept of Saints, but if I ask them to tell me what a Saint is, they have no good answer. Similarly, I've heard Catholics say this or that religion is not truly "Christian" because they "don't believe in Christ", but when asked for the details, they have little to say. Instead, as I just said, my decision to be a Catholic came after an exploration process in which I studied, and more or less practiced, many different religions, within and outside Christianity. For example, my hobby of practicing "Astral Projection" is something I picked up when studying Eastern philosophies and religions.
My decision to become a Catholic was influenced, more or less, by two major factors:
- Of the two religious philosophers who have had the most influence on me, namely Jesus of Nazareth (Christ) and Siddhartha Gautama (Buddha), Jesus is definitely the one I am more aligned with. Besides, from my point of view, Gautama's teachings are subsumed by Jesus' teachings. Therefore, I identified myself with Christianity.
- Tolerance is an important concept for me, and of all the Christian churches, the Catholic church is (nowadays), from my perspective, the most tolerant one, at least when it comes to other religions.
The third, and most decisive, factor that led me to become a Catholic was that, during the exploration phase, the most intense religious and spiritual experiences I had took place in the Catholic church. However, by Catholic standards, I am liberal. So I think of myself as a liberal Catholic. On the other hand, compared to plain liberals, I rank as a conservative.
Now, continuing with my background: I do not consider myself a homophobe. In fact, you will often find me arguing in favor of tolerance, not only for homosexuals, but for other discriminated groups as well (see "tolerance is important for me" above). I regret that homosexual people are the target of so much stress and pressure from society's rejection, forcing them to hide and negate a part of themselves. I think that the clandestinity that some of them feel they have to live in might be a cause of the elevated promiscuity sometimes seen in some homosexual groups, making them a vulnerable target for sexually transmitted diseases. A few years ago, someone I highly appreciated died of AIDS. I didn't know of his homosexuality until after I knew he was sick. The illness practically destroyed him; it was a very hard thing to see. He was a fine friend, and someone who cared about people when others didn't. He was a responsible person, a faculty member and head of the Dept. of Electrical Eng. where I studied. He was the kind of professor who would develop a personal relationship with students, because he cared about them... something I haven't seen much in other professors I've met. He helped me several times. From my point of view, he was not the kind of person who would have a promiscuous sexual life. I could never stop wondering what would have become of him had he lived in an environment where he could freely express his sexuality and carry on an open, healthy relationship. Maybe he'd still be alive, helping people, or maybe not... To be fair, I must also say that I've had some quite bad experiences with homosexuals, but I never judge a whole group by the actions of a few of its members.
Now, all that being said:
( Here is what I would have voted for )
Very, very interesting...
I think both philosophies (open vs. closed source) can coexist and that there is a place for each of them. Whenever it gets commercial, I think that closed source will almost always be the more viable way. Whenever you have some technical advantage, you want to keep it away from your competitors. But then again, this can create monopolies, which discourage innovation. So there's always a subtle balance that must be maintained. Take Windows as an example. I think no one can deny that Windows has gotten better over the last few years. Why? Well, serious contenders entered the scene (Linux), and so they had to start concerning themselves with quality. Before that... what was the incentive to innovate? None. So, closing the source can give you an edge over your competitors, but can be detrimental to users.
Another advantage of proprietary software, in a commercial context, is that you have pressure to improve. You have stakeholders to answer to. Your money is at risk. So, in a healthy competitive environment, there is pressure to innovate. But then again, closing the source code can damage that healthy competition. This pressure is not present in open source software. Contributors don't really risk anything, which makes contributing easy... but may also lower the standards. That's why so many open source projects don't really take off or make an impact.
Now, open source is about freedom. Pure science and contribution. People who contribute don't do it for the money; they contribute out of pure ideals. Because they truly love the science... because they want to make it. Their satisfaction comes from within, so questions like "how do you make money out of this?" are just not applicable. These computer scientists are the Galileos, Bernoullis, Fermats, and Turings of today... they do it because they love computers, they love software, and they want to advance the state of the art. It is innovation in its purest form. Whereas thousands of mediocre programmers can hide behind proprietary companies and software engineering teams, in the open source environment they can't hide, just as less-than-brilliant mathematicians can't shine in the presence of the big ones. In this environment, knowledge, not money or status, is respected. That is why anything less than that is neglected... that's why you don't see nice UIs in open source software: it's just less important. Not challenging, just decoration. However, if you want to sell, you need to take care of this. The truth is that in open source software, more often than not, the end user is neglected. If you are to use open source software, you need to be a "real man". Don't come whining about difficult interfaces or crappy UIs... heck, who needs UIs? Real men use the command line!... that is the mentality. That is why Linux hasn't gotten to the desktop... and it might never do so...
But... that's fine!!! That is just OK, because there is a place for everything. I just got a computer for my wife, and I put Windows on it. The truth is I wouldn't even try to convince her to use Linux... because that would just be stupid. Linux is not a desktop OS. I like tinkering with my computer... so I want an OS that lets me do so, that encourages me to do so! But she doesn't :-) She needs office utilities of commercial quality. I don't really care about that...
The philosophy behind open source software can be summarized in Dennis Ritchie's famous quote:
"Unix is basically a simple operating system, but you have to be a genius to understand the simplicity."
You can substitute Linux for Unix above :-) Most Windows users probably won't know who the heck Dennis Ritchie is. But I'm sure all Linux users will. That tells you everything...
Now, I am a geek. Proud of it :-) I love open source software because it represents what I believe in: the pursuit of knowledge for the advancement of science and humanity, not for the enlargement of a few people's pockets. But that's just me.
So, let me finish by summarizing the pros and cons of each of the two sides:
Closed source, the pros:
- Most viable for commercial products.
- It's the best way to maintain an advantage over competitors.
- Competition forces innovation.
- Because the production techniques of the competition are hidden, it forces greater investment in R&D, which promotes creativity.
- Control of the program is centralized, so it's easier to manage the lifecycle of the program.
- Easier to impose a specific development paradigm.
- Because the effort is more user oriented, user support tends to be more personalized.
- Because it is mostly associated with commercial products, it promotes a more customer-oriented approach.
Closed source, the cons:
- Commercially speaking, promotes monopoly.
- Since nobody is looking, and the incentive is purely economic, it may be easier to commit plagiarism, that is, to steal someone else's intellectual property.
- Without proper competition, innovation is discouraged (no incentive to innovate in terms of profits).
- Project is usually controlled by organizations, so intellectual property at the individual level tends to be minimal or completely lost.
- The intellectual environment is overridden by "organization philosophies" in most cases. Therefore, products from the same organization tend to look the same, and have the same problems.
- May promote poor quality standards: peer review is minimal, if not absent. Companies have to invest in QA, but if there's no strong competition, there's no incentive to do so (e.g., BSOD).
Open source, the pros:
- Maximal peer review.
- Encourages innovation, and spread of knowledge.
- Encourages cross-collaboration between different projects (no one is competing).
- Because everybody is looking, it's harder to commit plagiarism.
- High educational value because any individual can potentially learn by just looking.
- Promotes the creation of communities around similar interests, who help each other to solve problems (a form of user support).
- Intellectual environment is very diverse.
- In most cases, every individual gets credit for their work (line of code by line of code).
Open source, the cons:
- Might not be very viable for commercial efforts.
- Very hard to control destiny of the software (e.g., forks).
- Since there is no external pressure, and the project relies mostly on volunteers (and whatever time they can put into it), the development process can take very long.
- The focus is usually the "big picture" problem; there is no incentive for "decoration" (better UIs, user-friendly features).
- User support is not personalized, so if there is an urgent problem, it is either do-it-yourself or wait until someone has time to give you a hand.
- Since there is no competition, incentive to impose quality standards may be minimal (however, in practice, many open source projects implement good QA measures).
- The intellectual environment may be very diverse, but this may lead to conflicts that take long to resolve and delay development. This promotes the appearance of "dictators", which in turn, depending on the leadership capabilities of these leaders, may diminish intellectual diversity.
What's the conclusion? Every project is different in nature. Depending on your needs, evaluate the circumstances and the pros and cons of each philosophy, and take the approach that better suits your expectations. For me, most of the time (if not all the time :-)) it will be open source. But this might not be true for everyone.
Well, that's my take on this subject. Feel free to comment if you agree and/or disagree with any concept portrayed above :-)