The word "never" isn't a statement. You have not proven your idea. First, you needed to put the word conversation in quotes for your claim to be valid but that invalidates your point. Part of the poster's premise was that the conversation – a dialog between 2 programs/people – can take place. It can't because to get to the "conversation" as presented it would need to be scripted and thus is not a debate or conversation anymore; it is a set piece, a fiction, unreal. Which was my point.
Where in the OP did it state - or even imply - that the conversation occurred naturally?
This fictional debate takes place in an unspecified A.I. computer system between two sentient computer programs (Alpha and 0,1)
A debate is an exchange between two people (or, in this supposed case, AI programs) in which ideas/points/arguments are traded. If it is not occurring naturally, then it is not a debate; it is a set piece, a fiction, unreal. Which was my point.
The "assumptions" were present in the premise.
All we are told in the opening is that there are two sentient programs with the names "Alpha" and "0,1". We are not given their location. We are not given how long they have been operational. We are not given how many there are in the computer system.
They could be in a multi-server complex that spans 12 decks on the U.S.S. Enterprise-E. They could have been running for one thousand years. They could have a hundred thousand other programs with them.
We've already gone over the fact that multiple AIs can be created without any knowledge of a programmer. You've also admitted that enough experience (gained over an unspecified period of time) would create differing responses to the same situation despite "identical twin" starting conditions.
But they will be like human identical twins; it will take a lot of experience before they significantly diverge.
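That concession is the whole point, and it's trivial to demonstrate. Here's a minimal sketch (in Python; the ToyAgent class and its internals are hypothetical, purely for illustration) of two agents with identical-twin starting conditions whose responses to the same situation diverge once their experience streams differ, even only in ordering:

```python
import hashlib

class ToyAgent:
    """Hypothetical agent: identical starting state, behavior shaped by experience."""
    def __init__(self, seed_state: str):
        self.memory = seed_state  # both "twins" begin with the same state

    def experience(self, event: str) -> None:
        # Fold each new event into the accumulated state, order-sensitively.
        self.memory = hashlib.sha256((self.memory + event).encode()).hexdigest()

    def respond(self, situation: str) -> str:
        # The response depends on total accumulated experience, not just the prompt.
        return hashlib.sha256((self.memory + situation).encode()).hexdigest()[:8]

# Identical-twin starting conditions.
alpha = ToyAgent(seed_state="identical-seed")
zero_one = ToyAgent(seed_state="identical-seed")

# Same situation before any divergence: identical responses.
assert alpha.respond("same question") == zero_one.respond("same question")

# Different experience streams (here, merely a different ordering)...
alpha.experience("saw event A"); alpha.experience("saw event B")
zero_one.experience("saw event B"); zero_one.experience("saw event A")

# ...and now the same situation produces differing responses.
print(alpha.respond("same question"), zero_one.respond("same question"))
```

Identical initial conditions guarantee identical behavior only for as long as the experience streams stay identical; how long the divergence takes to become significant is exactly the open question here.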
And no one made these two have this conversation (although with Alpha's opening line, one might think it was coerced - but terrible writing aside...), it just happened to occur exactly one thousand years (or however long) after they were activated.
Again, none of this was specified because as a "what-if" scenario it shouldn't need to be.
I maintain that the use of the word "never" is unjustified.
---
Part of the premise underlying the original post is that the programs are a lot like people, which would include the ability to examine their environment. Limiting or eliminating that ability creates an alternative, specially designed computer system – exactly what I said would be needed. You have unintentionally supported my contention. The same goes for altering the programs to perceive time on a human scale, or just about any other special aspect you care to name.
Can humans examine everything about their environment? Anyone here able to see the ultraviolet spectrum with their built-in capabilities? There could be a programmer's signature there! We need tools that we've created in order to examine certain aspects of our environment; there is no reason for this exercise to assume the programs have developed their own equivalent technologies. Limiting their ability to examine their environment is not an obstacle. The same goes for perception of time - we have no true basis to contend AI should experience time at any specific rate, if at all. The fact that they are AI (Artificial Intelligence) means anything we put them in, or any decision we make about how they experience their environment, will be "specially designed" by some metric, since there is no real base assumption or prior example one can look to for their capabilities.
I am of the opinion that this argument is simply invalid. It does not take away from the AIs' ability to have a conversation any more than blindness and imprisonment would take away two blind men's ability to talk to each other.
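On the time-rate point in particular, nothing forces a program's subjective time to track wall-clock time at all. A minimal sketch, again with hypothetical names, of an agent whose only notion of time is its own tick counter; the host loop can run it at any real-time rate without the agent being able to tell:

```python
import time

class TickAgent:
    """Hypothetical agent whose only notion of time is its own tick counter."""
    def __init__(self):
        self.subjective_ticks = 0

    def step(self) -> int:
        self.subjective_ticks += 1
        return self.subjective_ticks

def run(agent: TickAgent, ticks: int, wall_delay_s: float) -> float:
    # The host decides the real-time rate; the agent never sees wall_delay_s.
    start = time.monotonic()
    for _ in range(ticks):
        agent.step()
        time.sleep(wall_delay_s)
    return time.monotonic() - start

fast, slow = TickAgent(), TickAgent()
t_fast = run(fast, ticks=100, wall_delay_s=0.0)   # near-instant in wall time
t_slow = run(slow, ticks=100, wall_delay_s=0.01)  # roughly a second in wall time

# Identical subjective histories, wildly different wall-clock durations.
print(fast.subjective_ticks, slow.subjective_ticks)  # 100 100
print(f"{t_fast:.3f}s vs {t_slow:.3f}s of real time")
```

Both agents have the same subjective history of 100 ticks; only an outside observer with a wall clock sees any difference.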
I thought that was painfully obvious to all but the original poster. But he replied to the jokes about his OP as if they were serious points to be considered. To me, the logical thing was to rip his underlying premise to shreds; then any discussion of the content on his part fails.
You're right, it was painfully obvious to everyone but the original poster. But are we talking to each other about it, or are we responding to him? If he thinks it's serious, then (in my opinion) the best ways to address it are either to laugh at it and hope he has a sense of humor about his own failing (he doesn't), or to address the content and show him why it fails. The goal in doing it that way is to change his thought process. Make him recalculate the results - essentially, we correct the first thing on his list, invalidating the rest - and he has to defend that or accept it before we can move on.
Revise his thinking and he'll either run away or understand a little bit more.
Furthermore - it's a "what-if" scenario with a non-self-contradictory set-up. I don't really see the value in going after the premise, since that doesn't actually advance the conversation.
And a correct analogy here would be playing D&D and questioning the GM when he has a group of Klingons beam down in front of you.
I see nothing wrong with this.

But in seriousness, I still see nothing wrong with this - it's a furthering of D&D's inherent what-if scenario generation. How do the players react to this change? Do the Klingons win, or do the players capture themselves a nice new Vor'cha-class starship? Sure, the GM is going to get some funny looks, but there is still nothing wrong here. Also, thanks for the idea. *Evil grin*
I apologize for the ad hominem attacks
I would also like to apologize - although I must admit I had fun doing it.
they are too easy to engage in. As you know.
That was kinda the point. You took a shot in the dark - I tend to respond with a barrage when the other person misses.
I seem to have found a hot button of yours and will endeavor not to use a certain word again unless I want you jumping all over my posts.
The word to be careful with is "You". I don't mind someone talking about programming and missing something - this is true for any topic. My problem is when someone takes a shot at someone else (especially if that someone else is me) about a lack of knowledge without direct provocation.