Python 2.6 to Smooth the Way for 3.0, Coming Next Month

darthcamaro writes "Some programming languages just move on to major version numbers, leaving older legacy versions (and users) behind, but that's not the plan for Python. Python 2.6 has the key goal of trying to ensure compatibility between Python 2.x and Python 3.0, which is due out in a month's time. From the article: 'Once you have your code running on 2.6, you can start getting ready for 3.0 in a number of ways,' Guido van Rossum said. 'In particular, you can turn on "Py3k warnings," which will warn you about obsolete usage patterns for which alternatives already exist in 2.6. You can then change your code to use the modern alternative, and this will make you more ready for 3.0.'"
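For a rough idea of what those warnings catch (a minimal sketch, not from the article): running a script under "python2.6 -3" flags obsolete idioms such as dict.has_key(), for which 2.6 already has the forward-compatible spelling.

d = {"spam": 1}

if d.has_key("spam"):    # works in 2.x, warns under "python2.6 -3", gone in 3.0
    print "found it"

if "spam" in d:          # the spelling that works in both 2.x and 3.0
    print "found it"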
  • by the eric conspiracy ( 20178 ) on Friday October 03, 2008 @07:03PM (#25251947)

    Why not just wait for 3.0 to make the changes? That way you'll only have to test everything once.

    And if it's like some other languages you might have a long time to wait before 3.0.

    • by jeremiahstanley ( 473105 ) <miah@NoSpam.miah.org> on Friday October 03, 2008 @07:11PM (#25252021) Homepage

      Because the development cycle is longer than that for derivative projects. Imagine if you could have a cycled and tested app that was ready from day 0...

    • by arevos ( 659374 ) on Friday October 03, 2008 @07:46PM (#25252305) Homepage

      And if it's like some other languages you might have a long time to wait before 3.0.

      Given that the first release candidate [python.org] of Python 3.0 is already out, I doubt we'll be in for a very long wait.

    • Re: (Score:3, Informative)

      by AM088 ( 1170945 )

      I think the point is that with 2.6, your old code will work but will tell you what to change. If you move to 3.0, unless you have those changes already, it just won't work.

      • Re: (Score:2, Insightful)

        by fyngyrz ( 762201 ) *

        If you move to 3.0, unless you have those changes already, it just won't work.

        ...which is why some heavy python users, myself included, aren't going to use 2.6 or 3.0. I have huge amounts of python in operation, and the very last thing I'm going to do is break any of it with an incompatible language that happens to slightly resemble python (no matter who wrote it, and no matter what they call it, it isn't python if it can't run mundane python code.)

        Every once in a while we see one of these "brainstor

        • by tazzzzz ( 203300 ) on Friday October 03, 2008 @08:54PM (#25252751) Homepage

          ...which is why some heavy python users, myself included, aren't going to use 2.6 or 3.0. I have huge amounts of python in operation, and the very last thing I'm going to do is break any of it with an incompatible language that happens to slightly resemble python (no matter who wrote it, and no matter what they call it, it isn't python if it can't run mundane python code.)

          "slightly resemble python"? Python 3.0 code looks just like the Python that's been around for years. Maybe there's some handy new syntax (with), but it's still Python.

          This is not about fundamentally changing Python. This is about cleaning up warts, some of which have been around since Python 1.x.

          If you're going to modify a language, you *must* do it in a compatible manner, otherwise what you're doing is making a new language that will require an entirely new community. Names notwithstanding, and resemblance beyond incompatibilities notwithstanding.

          From what I've seen, the Python devs have put together about the best possible migration path while still actually making the changes that need to be made.

          Here's the picture, in case it's not clear: Python 2.6 is just as backwards compatible as the other 2.x releases. Which is to say that porting from 2.5 to 2.6 is pretty trivial. I'd expect any actively used and maintained library to be 2.6 compatible within weeks (and a great many probably didn't break at all).

          2.6 lets you use many of 3.0's features that don't break compatibility (and there are many). It also has a warnings mode to help you spot 3.0 incompatible code. And it lets you selectively turn on 3.0 features within a module.

          Want to start using the new print function?

          from __future__ import print_function

          Voila! The print keyword goes away and you have the new print function. Certainly bits of new Python 3.0 syntax work now as well:

          try:
                  1/0
          except ZeroDivisionError as e:
                  pass

          The "as e" bit is new.

          Finally, there's actually a "2to3" tool that makes many of the changes in an automated fashion.
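          As a rough before/after sketch of what 2to3 does to a hypothetical module (the tool edits the file in place when run with -w):

          # before: Python 2.x idioms
          for i in xrange(3):
              print "line", i

          # after running "2to3 -w module.py" (roughly)
          for i in range(3):
              print("line", i)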

          The single biggest change from a compatibility standpoint is that "foo" is a unicode object in 3.0 and a string (set of bytes) in 2.x. You can even prepare for that switch:

          from __future__ import unicode_literals

          foo = "foo" # this will be unicode
          bar = b"bar" # this is a set of bytes
          unibar = bar.decode("utf-8") # get a unicode from the bytes

          They have put *a lot* of thought into how to make this transition. People will gradually shift to 2.6, just as they did with 2.5. And, over time, they will change to using the new features. They'll probably upgrade to 2.7 (yes, there will be one), and use the new features even more. And eventually their code will just be 3.0 code and the switch will be a no brainer.

          • No. You can go on all you want about "needed to change" and "autofix" and etc, but the bottom line is that this code presently isn't broken, and I am not about to fix code that isn't broken. It makes no sense on any level; financially, time-wise, or strategically. I have better things to do than refactor my code for entirely arbitrary reasons. Perhaps I just place a different value on my time than you do; that's fine. You should, of course, feel free to do whatever you like.

    • Not really (Score:4, Interesting)

      by widman ( 1107617 ) on Friday October 03, 2008 @08:00PM (#25252419)
      You can keep your code compatible with both at the same time. Deprecated features are trivial to rewrite in most cases. There are even tools for this.
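      A tiny sketch of what "compatible with both at the same time" can look like in practice (nothing project-specific assumed):

      from __future__ import print_function   # makes print() behave the same on 2.6 and 3.0

      def greet(name):
          print("Hello,", name)   # runs unchanged under python2.6 and python3.0

      greet("world")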
    • by sjames ( 1099 )

      Why not just wait for 3.0 to make the changes? That way you'll only have to test everything once.

      Because 2.6 and 3.0 have different objectives.

      2.6 is simply the next in the 2.x line and one of the new features is the ability to import 3.0 features from __future__. Otherwise, it'll be no bigger a transition than 2.4 to 2.5 was. Existing programs will likely run without any issues.

      3.0 is a bigger transition. It will drop a few things now considered mis-features (if we had known then...). Most current programs will break in 3.0 (but often in ways that are trivially fixable).

      The hope is that by continuing

  • tough transitions (Score:4, Interesting)

    by AceJohnny ( 253840 ) <<jlargentaye> <at> <gmail.com>> on Friday October 03, 2008 @07:06PM (#25251991) Journal

    These kinds of compatibility switches are make-or-break. I'm glad there's Python 2.6 to try to ease the problem, but Py3k means that everybody who publishes Python software will all of a sudden have to maintain two branches, one for the Python 2.x line and one for the Python 3.x line.

    This isn't the same as one software package having "legacy" and "bleeding edge" branches, because that's their own choice. In this case the underlying language is forcing them to choose.

    Honestly, I'm not confident in the economics of such transitions, and believe Py3k will die out.

    • Re: (Score:3, Interesting)

      by imbaczek ( 690596 )
      it'll take several years, but a critical mass will switch eventually IMHO.
      • Not only that but new users will pick up 3.0. Actually, I want to learn how to program and I'm waiting for 3.0 for that very reason.
    • Re: (Score:3, Insightful)

      by Anonymous Coward

      Honestly, I'm not confident in the economics of such transitions, and believe Py3k will die out.

      Why would Python 3.0 'die out'? Even if you don't believe existing projects will make the switch there's no reason why new projects won't want to have the considerable benefits of using Python 3.0.

      • Re: (Score:3, Funny)

        by GooberToo ( 74388 )

        Why would Python 3.0 'die out'?

        It's widely believed a large asteroid fell from the sky and wiped the mighty python 3.0 out. ;)

    • by DragonWriter ( 970822 ) on Friday October 03, 2008 @07:46PM (#25252309)

      These kinds of compatibility switches are make-or-break. I'm glad there's Python 2.6 to try to ease the problem, but Py3k means that everybody who publishes Python software will all of a sudden have to maintain two branches, one for the Python 2.x line and one for the Python 3.x line.

      No, they don't "have to" maintain two branches. They can choose to, or they can maintain one (which depends on their particular circumstance); if necessary (if it is an app and not a library) they can just distribute the right interpreter with the app.

      This isn't the same as one software package having "legacy" and "bleeding edge" branches, because that's their own choice.

      Yeah, actually, it is exactly the same as that, at least as long as bug-fixes and maintenance continues on Python 2.x: the "one software package" being the Python interpreter.

      And, yeah, if those maintaining python-based projects choose to maintain Python-2.x and Python-3.x based versions, that will also be an instance of exactly what you say it wouldn't be, as it will still be their own choice.

      • by GooberToo ( 74388 ) on Friday October 03, 2008 @08:12PM (#25252505)

        For whatever reason, people fail to understand python natively supports parallel installs. Furthermore, since python's preferred script magic is "#!/usr/bin/env python", rather than "#!/usr/bin/python", the executing script will use the python that it finds in your path. Additionally, you can tie a script to a specific version, such as "python2.5". Want a different python? Change your path. A script requires a specific version of python? Change the script to require it. It's one line and trivial. It's at the top of the file, so there's no hunting even.
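        For illustration, the pinned style being described looks something like this at the top of a script (paths as commonly installed; env usually lives at /usr/bin/env):

        #!/usr/bin/env python2.5
        # Pinned to the 2.5 interpreter. Change this one line to plain
        # "#!/usr/bin/env python" to float with whatever is first in $PATH.
        import sys
        print sys.version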

        New python releases only pose problems for the uninitiated, the ignorant, or the dumb.

        • by jgrahn ( 181062 ) on Saturday October 04, 2008 @03:20AM (#25254545)

          For whatever reason, people fail to understand python natively supports parallel installs. Furthermore, since python's preferred script magic is "#!/usr/bin/env python", rather than "#!/usr/bin/python", the executing script will use the python that it finds in your path. Additionally, you can tie a script to a specific version, such as "python2.5". Want a different python? Change your path. A script requires a specific version of python? Change the script to require it. It's one line and trivial. It's at the top of the file, so there's no hunting even.

          Changing my path is not practical. It's too broad. I'd have to write a shell script wrapper for the application which did 'env PATH=new_python:$PATH the_real_application "$*"' or something. And it's not just me; I'd have to communicate this to all other users of the system somehow. And changing one line of a script is not trivial, if I'm not root.

          All this may seem like a minor thing, but it adds up. And no other good language puts me in situations like that.

          New python releases only pose problems for the uninitiated, the ignorant, or the dumb.

          Or those of us who have been around for a while, and seen innocent backwards-incompatible changes become maintenance nightmares ... Ok, maybe not a nightmare in this case, but an inconvenience and annoyance which will keep being inconvenient and annoying for years, until the last Python 2.x dependency goes away.

          The best way to judge this would probably be to look at what Linux distributions like Debian want to do about Python 3.0. They ship one Python as the default (2.4 currently, for Debian) but provide others too. I bet even a change from 2.4 to 2.5 is a major migration for them.

          • Actually, it's more like this to change to a different version: ln -sf /usr/bin/python2.5 /usr/bin/python
          • by afd8856 ( 700296 )

            I find it really easy to use virtualenv (sometimes together with zc.buildout) to encapsulate applications and modules. In fact, I tend to cuss when a module that I want to try doesn't offer a way to be easily integrated with virtualenv (such as an egg or at least a subversion checkout with a working setup.py package file).

          • Re: (Score:2, Informative)

            by GooberToo ( 74388 )

            Changing my path is not practical. It's too broad. I'd have to write a shell script wrapper for the application which did 'env PATH=new_python:$PATH the_real_application "$*"' or something. And it's not just me; I'd have to communicate this to all other users of the system somehow. And changing one line of a script is not trivial, if I'm not root.

            You have a system admin problem, not a python problem. If you can't run system-installed software and your admin refuses to help, you have an admin problem. Making

        • I want to become less uninitiated:

          For whatever reason, people fail to understand python natively supports parallel installs.

          But some popular environments (Windows, Mac, shared web hosting) identify scripts not by their script magic but instead by their file extension. When I used Google to search for python parallel install windows, I got a whole bunch of results about parallel ports and parallel processing. Does a parallel install work in Linux, Solaris, *BSD, and the like, or is there a recommended way to use it with more popular desktop operating systems such as Windows and Mac OS X? And how

          • It is simply the way python installs. Each python install places its library into a numbered directory (e.g. python2.4, python2.5). The only thing you may have to change is the "python" proper binary, which is copied from or linked to the numbered python binary.

            In other words, each python install should have its own directory structure, which ensures one installation doesn't affect the other. The only other issue is which binary you get when you run "python". Typically "python" proper points to the newest i

            • by tepples ( 727027 )

              The only thing you may have to change is the "python" proper binary, which is copied from or linked to the numbered python binary.

              So, under Windows, how do I force a specific .py file to use C:\python24 or C:\python25 or C:\python26 or C:\python30 upon double-click, without changing behavior of other .py files installed on the same machine? And how can I make mod_python read the #! line before loading a module?

              I can't speak for OSX but the above is true for the other platforms.

              Mac OS X should act like FreeBSD. I'm more concerned about 1. Windows, and 2. shared web hosting using mod_python and the like.

              • Admittedly, I did forget about the Windows case.

                Create multiple users, each with its own path. Use runas features. Some people use wrapper scripts to set their path. Most people seem to prefer the first option as they typically don't use the command line in the first place. If you are a command line guy, you'll likely prefer the second option.

                A third option is to use cygwin, which does honor the environment's path and magic. Some people hate cygwin. If you're a command line person on Windows, you should s

        • by Nevyn ( 5505 ) *

          Furthermore, since python's preferred script magic is "#!/usr/bin/env python", rather than "#!/usr/bin/python",

          It's possible that some of the python maintainers prefer that, but the distributions sure as hell don't. "Grab a random python binary that you hit first in my path" does not make for a reliable system. It destroys any idea of security (SELinux, setuid, consolehelper, etc. etc.), and I've seen more than a couple of bugs where applications stupidly used it and then someone wanted to try a newer python in

          • When you have system dependencies, that's a little different. Just install your new python ensuring your old python is still the system default python. Change your path. You're done.

            The system scripts still run. Your new scripts now run using the new python. Oops... stuff works well and no issues exist.

    • Doesn't matter (Score:2, Interesting)

      Most distros already include the current and previous versions of Python. So Ubuntu, for instance, will include 2.6 and 3.0, and possibly 2.5 as well.

      Furthermore, you can check to see what version of Python you're running under and make your code so that it accommodates both. This is all accessible via sys.version or sys.version_info.


      >>> import sys
      >>> sys.version
      '2.5.1 (r251:54863, Jul 31 2008, 22:53:39) \n[GCC 4.1.2 (Ubuntu 4.1.2-0ubuntu4)]'
      >>> sys.version_info
      (2, 5, 1, 'final', 0)

      With that knowledge, y
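      A minimal sketch of that kind of version branch (the text_type name is just for illustration):

      import sys

      if sys.version_info[0] >= 3:
          text_type = str          # 3.x: text is str
      else:
          text_type = unicode      # 2.x: text is unicode

      print(sys.version_info[:3])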

      • by kisielk ( 467327 )

        Another common pattern to use for this, as well as for libraries, is the following:


        try:
                import one_way_to_do_it
        except ImportError:        # fall back when the preferred module isn't available
                import more_common_way_to_do_it
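        One real-world instance of that pattern from around this era (json landed in the 2.6 standard library; simplejson is the third-party package with the same API):

        try:
            import json                  # standard library from 2.6 on
        except ImportError:
            import simplejson as json    # same API, third-party, for older 2.x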

        • Another common pattern to use for this, as well as for libraries, is the following:


          try:
                  import one_way_to_do_it
          except ImportError:
                  import more_common_way_to_do_it

          But how well does a try block work with things that depend on from __future__ statements that Python 2.5.x doesn't recognize, such as the different print syntax and the different string literal syntax ("8bitchars", u"32bitchars" vs. b"8bitchars", "32bitchars")? From Python 2.5.x's definition of a future statement [python.org]:

          A future statement must appear near the top of the module. The only lines that can appear before a future statement are:

          • the module docstring (if any),
          • comments,
          • blank lines, and
          • other future statements.
    • Re: (Score:2, Funny)

      by Anonymous Coward

      Honestly, I'm not confident in the economics of such transitions, and believe Py3k will die out.

      No wireless. Less space than a nomad. Lame.

    • Re: (Score:3, Insightful)

      by xant ( 99438 )

      Uh, it's almost exactly the opposite of what you're saying. You don't have to have a Python 3.x line; you can just deploy your code on Python 2.6, keep your working application working, and do all your new development and testing with Python 3.x warnings turned on. Then your next release is Python 3.0 compatible; or if you somehow fail to finish the Python 3.x upgrades in time for your next release, you don't have to release on Python 3.x, you can just keep using Python 2.6 even though your code is par

      • by thogard ( 43403 )

        Funny thing is that none of my production code base even runs under 2.6. I'm moving stuff from a very old server to new hardware and so far I've had to move 2.1, 2.2, 2.3 and 2.4 over, and some stuff broke when using the newest releases of some of those old versions. The result is now I have to spend lots of time maintaining programs that should not have to be maintained. I have never seen a project written in Python that meets its time or financial budget and stuff like this makes me want to ban the language

    • by sjames ( 1099 )

      I'm glad there's Python 2.6 to try to ease the problem, but Py3k means that everybody who publishes python software will all of a sudden have to maintain 2 branches, for Python 2.X line and Python 3.X line.

      If it was even slightly hard to install 2 versions of Python at the same time, that might be true. However, that's not the case. I see nothing there that will FORCE a developer to maintain two versions of their Python software.

      Most will probably stick with 2.x for now, perhaps trying out 3.x or just importing from future and playing with updating their code. By the time 2.8 is out, insisting on at least 2.6 to run your code will be perfectly reasonable. At that point, start importing from future and actuall

    • by brunson ( 91995 )

      Honestly, I'm not confident in the economics of such transitions, and believe Py3k will die out.

      Just like PHP 5?

  • What's new (Score:5, Informative)

    by ChienAndalu ( 1293930 ) on Friday October 03, 2008 @07:12PM (#25252031)
    Here are the changes [python.org].
    I really have to check out the multiprocessing package. Too bad that I have to wait for the print function and the new division handling.
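    For anyone else curious, multiprocessing (new in the 2.6 standard library) looks like this in its simplest form (a sketch, not from the linked changelog):

    from multiprocessing import Pool

    def square(x):
        return x * x

    if __name__ == "__main__":
        pool = Pool(processes=2)
        print pool.map(square, [1, 2, 3, 4])   # [1, 4, 9, 16]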
    • Re: (Score:2, Informative)

      by yuriyg ( 926419 )

      Too bad that I have to wait for the print function and the new division handling.

      Huh?
      from __future__ import print_function
      from __future__ import division

      • by mgiuca ( 1040724 )

        from __future__ import division has actually worked since Python 2.2.

        It's just that Python 3.0 finally gives them an excuse to make it compulsory.
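        For reference, what the import actually changes:

        from __future__ import division

        print 1 / 2    # 0.5  (true division)
        print 1 // 2   # 0    (floor division stays available)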

  • Cut the crap. (Score:5, Interesting)

    by Anonymous Coward on Friday October 03, 2008 @07:26PM (#25252165)

    These changes are NOT earth-shattering. 2.6 is mostly just going to add a few new features, the most important being the with statement. Most code written using Python idioms will be fine under 2.6 and 3.0. Now, if you tried to write Java-esque or C-esque code under Python, you might run into issues. Even then, I doubt it. They've been deprecating features for a while, and 3.0 is probably the point at which they'll be yanked... you've only had a year or two of DeprecationWarnings.
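    For example, as of 2.6 the with statement is available without any __future__ import (in 2.5 it needed one); the file name here is made up:

    with open("notes.txt") as f:
        for line in f:
            print line.rstrip()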

    I'm not sure why people whine about a language evolving. Retain backwards compatibility to a fault and you end up with C++, which is crippled by C-isms. You either know your code well enough that you could make the small incremental changes along the way, or you simply don't upgrade.

    What Python most needs is sane standard libraries. The current one is far too much of a "let's throw this in there" grab bag, with three different naming conventions and no package organization. It is a shame, because the language itself is pretty powerful in the right hands.

    • Re: (Score:2, Insightful)

      by jimdread ( 1089853 )

      I'm not sure why people whine about a language evolving.

      It's because all their old code breaks. And that hurts.

      • Re:Cut the crap. (Score:4, Insightful)

        by slimjim8094 ( 941042 ) on Friday October 03, 2008 @07:59PM (#25252409)

        So don't use Python 3.0. If it's critical, you're not upgrading from a known working base anyways, right? And if it's not, this will hold your hand.

        • So don't use Python 3.0.

          That would bring the same problems as the transition from PHP 4 to PHP 5. How would I deploy my product to end users who have installed Python 3.x as the system-wide handler for .py files? Will the Python Software Foundation recommend the use of an extension such as .py2? Conversely, if I do take advantage of Python 3.x, how would I deploy to end users who still use 2.x?

      • Isn't there a simple solution to that? I mean, someone or some group could take it upon themselves to maintain the old incarnation of the language, and then old code would continue to run fine.

  • String f**k up (Score:3, Interesting)

    by spitzak ( 4019 ) on Friday October 03, 2008 @07:41PM (#25252261) Homepage

    Reading the release, they have decided to really push 16-bit strings (they call this "Unicode" but it really is what is called UTF-16). I think this is a serious mistake.

    The proper solution is to use 8-bit strings, but any functions that care (such as I/O) should treat them as being UTF-8. Most functions do not care and thus the treatment of "Unicode" and "bytes" are the same.

    The problem with UTF-16 is you cannot losslessly convert a string that *might* be UTF-8 to UTF-16 and then back again. This is because any illegal UTF-8 byte sequences will be lost or altered. This is a MAJOR problem for code that wants to process data that is likely to be text but must not be altered under any circumstances; in effect, such programs are forced to be ASCII-only, even though UTF-8 is purposely designed so that such programs could display all the Unicode characters. Note that bad UTF-16 (i.e. with mismatched surrogate pairs) can be losslessly converted to UTF-8 and back.
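    For the record, here is the failure mode being described, in 2.x terms (a small sketch, not anyone's production code):

    raw = "abc\xff"                        # bytes that are not valid UTF-8
    try:
        raw.decode("utf-8")                # strict decoding refuses the data
    except UnicodeDecodeError:
        pass
    lossy = raw.decode("utf-8", "replace") # u'abc\ufffd'
    print lossy.encode("utf-8") == raw     # False: the original byte did not survive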

    This has been a real pain so far in our use of Python, and I am quite alarmed to see that they are changing the meaning of plain quotes in 3.0 to "Unicode". This is really a serious step backwards, as we will be forced to tell anybody using our system to put 'b' before all their string constants, and I suspect there will be a lot less automatic conversion of these strings to unicode when we want to display them. Note that Qt is causing a lot of trouble here too.

    • Re:String f**k up (Score:5, Informative)

      by Animats ( 122034 ) on Friday October 03, 2008 @08:01PM (#25252429) Homepage

      The problem is that there are three kinds of string-like objects in Python: UTF-16 strings, ASCII strings, and uninterpreted arrays of 8-bit bytes. Python 2.5 sort of supports all 3, with "array of bytes" the least well supported. Since this is a language without declarations, the semantics of this gets messy.

      The most common problem was that functions like ".read()" yielded strings, not arrays of bytes. This follows C standard library semantics, but is a bad fit to Python. In 3.0, ".read()" yields an array of bytes, not a string. If the data read is to be converted to a string, "decode" is required. That's the right answer.

      This is consistent with modern thinking about data representation. Consider SQL, which makes a similar distinction between "TEXT" and "BLOB".
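      In 3.0 terms, the boundary looks roughly like this (file name is made up):

      f = open("data.bin", "rb")
      raw = f.read()                # bytes, uninterpreted
      text = raw.decode("utf-8")    # str (unicode), only once an encoding is chosen
      f.close()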

      • by spitzak ( 4019 )

        Interesting. I was afraid they were making all these functions return strings. If they are returning bytes as well it would certainly make things a lot better. However I would expect them to have the same trouble I am having.

        Let's assume read returns a string of bytes. What I am worried about is that the following example will not work as expected:

        if file.read()=="utf8 string" ...

        I expect this will automatically convert the result of file.read() to UTF-16 and then do the comparison. This will

        • Re: (Score:3, Informative)

          by Animats ( 122034 )

          From What's new in Python 3.0 [python.org]: The str and bytes types cannot be mixed; you must always explicitly convert between them, using the str.encode() (str -> bytes) or bytes.decode() (bytes -> str) methods.

          That's the right way to do it, but I agree that as a retrofit to existing code, it's a headache.

          Worse, it's a problem that's detected at run time, not compile time, at least with the CPython implementation.
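          That run-time-only detection looks like this under 3.0 (a sketch):

          prefix = b"header: "
          title = "report"
          line = prefix + title   # raises TypeError, but only when this line executes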

          • by spitzak ( 4019 )

            Well in a lot of ways that (not doing any automatic conversion) is the only correct solution if they really want plain quotes to be Unicode and not bytes/utf-8. It will be such a pain to fix existing code, though, that I would not have thought they would do that.

            • by Animats ( 122034 )

              It might be helpful to run your programs through one of the more advanced Python compilers, like Shed Skin or PyPy, if and when they get converted to Python 3.0. They have implicit type analysis, and if you get data from "read" and apply a string operation without conversion, they will usually report that as a compile-time error. So you may get to find most or all of the errors up front. CPython, being a naive interpreter, will happily compile code that will always raise an exception at run time.

          • by mrvan ( 973822 )

            I'm using python in an environment with lots of external strings (from the web, from files), and the current mechanism is horrible. I end up with non-ASCII data in strings a lot if I'm not extremely careful about which string is ASCII and which is uninterpreted bytes, and have spent endless hours debugging silly decoding problems.

            If nothing else, having the read() methods return bytes and dealing with strings as unicode objects (regardless of internal encoding, I doubt that the python spec for

          • by spitzak ( 4019 )

            Are you sure it is doing this?

            In Python 2.5.2 this works:

            >>> u"abc"=="abc"
            True

            So it would appear some kind of conversion is done automatically.

            In my opinion this means programs will port easily, but it is going to open a whole lot of nasty holes as non-equal byte strings can appear equal when converted to UTF-16.

    • Re:String f**k up (Score:4, Informative)

      by John Millikin ( 1083757 ) on Friday October 03, 2008 @08:24PM (#25252591)
      Spoken like somebody that's never had to deal with encoding issues. Using UTF-8 internally is fine, but exposing it to the programmer is insane and error-prone. And if the programmer then proceeds to manipulate that raw byte buffer as a string, he's an idiot.

      The proper solution is to use 8-bit strings, but any functions that care (such as I/O) should treat them as being UTF-8. Most functions do not care and thus the treatment of "Unicode" and "bytes" are the same.

      You might not be aware of this, but computers are used for more than just transmitting text. I don't want my binary streams being rewritten to gibberish because some I/O routine was written to be too clever. Furthermore, not every system uses UTF-8. Some may even need to send data over a *gasp* network! Good luck getting every other computer in the world to start using UTF-8 immediately.

      The problem with UTF-16 is you cannot losslessly convert a string that *might* be UTF-8 to UTF-16 and then back again. This is because any illegal UTF-8 byte sequences will be lost or altered.

      If you try to convert bytes that aren't in UTF-8 using a UTF-8 codec, an error will be raised. This behavior is proper -- if you don't know what format your input is in, there's no way to perform text-based operations on it.

      This has been a real pain so far in our use of Python, and I am quite alarmed to see that they are changing the meaning of plain quotes in 3.0 to "Unicode".

      Every developer I know uses Unicode strings already. The new behavior is just one less character to type in front of literals.

      This is really a serious step backwards, as we will be forced to tell anybody using our system to put 'b' before all their string constants

      Otherwise said as: "We're too stupid to fix the glaring encoding errors in our product, so we'll just use bytes everywhere and pretend it's all working". Also, Unicode strings in Python are implemented with either UTF-16 or UCS-4 depending on platform.

      • Re: (Score:3, Interesting)

        by spitzak ( 4019 )

        You might not be aware of this, but computers are used for more than just transmitting text. I don't want my binary streams being rewritten to gibberish because some I/O routine was written to be too clever

        Thank you for explaining exactly why I want UTF-8 to be used, while thinking you were arguing against it.

        Data is NOT just text. Therefore we should not be mangling it because we think it is text. We have enough trouble with MSDOS inserting \r characters. This crap is a million times worse.

      • Re: (Score:2, Interesting)

        by spitzak ( 4019 )

        Spoken like somebody that's never had to deal with encoding issues. Using UTF-8 internally is fine, but exposing it to the programmer is insane and error-prone. And if the programmer then proceeds to manipulate that raw byte buffer as a string, he's an idiot.

        The compiler will turn "unicode" into the utf-8 encoding. The programmer does not see \xnn sequences of the utf-8 bytes. Try some modern compilers with utf-8 support some day before you say anything stupid again.

        Any programmer that modifies UTF-16 as a

        • (Note: I am not the grandparent)

          So, what if I'm from the UK using an editor that uses ASCII and I insert a £ into my python code or pull one from a data file? That's at code point 163 in ISO/IEC 8859-1... but if it's assumed to be utf-8, it'd be part of a multi-byte character because the first bit is set.

          • by spitzak ( 4019 )

            If you actually have the byte 163 in the file, it almost certainly will be an invalid UTF-8 encoding (it would have to be directly preceded by an accented letter in ISO-8859-1 for it to look like legal UTF-8).

            One of the big reasons why I want the strings to remain bytes is because of exactly this. Yes the compiler can convert, but, believe it or not, we really do read text produced by other programs, often with incorrect UTF-8 encoding. Only by leaving it as bytes can we properly analyze this. It is rel

          • by spitzak ( 4019 )

            Maybe I should clear this up a bit more.

            If your editor inserted the UTF-8 encoding of two bytes (0xc2,0xa3 I think) the result should be those same two bytes. However I/O routines when told to print the string should then decode the UTF-8 and produce the pound sign. If the compiler is producing something other than UTF-8 (such as current Python does if you put a 'u' before the quote) then the compiler does the conversion, not the I/O routine. My main argument is that I think this is a job for I/O, not the c

      • by tepples ( 727027 )

        Otherwise said as: "We're too stupid to fix the glaring encoding errors in our product, so we'll just use bytes everywhere and pretend it's all working".

        Or "our handheld device has only 4 MB of RAM, and the version of Python provided by our system library vendor, which is UCS-4, would only allow us to load one-fourth the text into an in-memory database".

        Also, Unicode strings in Python are implemented with either UTF-16 or UCS-4 depending on platform.

        How, when, and by whom is this decision to turn on --with-wide-unicode (UCS-4) made for each platform? What Google keywords should I have used?

        • by Tacvek ( 948259 )

          How, when, and by whom is this decision to turn on --with-wide-unicode (UCS-4) made for each platform? What Google keywords should I have used?

          Well that obviously varies by the platform. Under Debian GNU/Linux the decision would be made by the maintainer of the python package. But does it really matter? On what platform are you forced to use the python provided by the system vendor, rather than your own package?

          • by tepples ( 727027 )

            On what platform are you forced to use the python provided by the system vendor, rather than your own package?

            On platforms that verify digital signatures on executables and where certificates aren't handed out like candy. But still, for applications deployed in Europe and the Americas (not east Asia), UCS-2/UTF-16 is still significantly larger than UTF-8.

    • Re:String f**k up (Score:5, Informative)

      by belmolis ( 702863 ) <billposerNO@SPAMalum.mit.edu> on Friday October 03, 2008 @08:41PM (#25252683) Homepage

      Python does not use UTF-16 strings; it uses UCS-2 strings. The difference is that in UCS-2, every character is represented by exactly two bytes, while in UTF-16, some characters, those outside Plane 0, are represented by a pair of two-byte "surrogates", totaling four bytes. UCS-2 does not provide any representation for characters outside the BMP. In other words, UCS-2 is a straightforward fixed-length encoding, while UTF-16 is a more complex variable-length encoding.

      Python can in fact use either of two internal representations for text: UCS-2 or UTF-32 = UCS-4. If you give the option --enable-unicode=ucs4 to configure when building Python, you will get a Python that supports all of Unicode rather than just the BMP.
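      An easy way to check which build you're on (works in 2.x today):

      import sys
      print sys.maxunicode   # 65535 on UCS-2 ("narrow") builds, 1114111 on UCS-4 ("wide") builds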

      • UCS-2 does not provide any representation for characters outside the BMP

        That's not quite correct. You can use characters outside the BMP, they just have messed up len and slices, since they're actually made of two pseudo-characters.

        >>> pb = u'\U00010000'
        >>> pb
        u'\U00010000'
        >>> len(pb)
        2
        >>> pb[0]
        u'\ud800'
        >>> pb[1]
        u'\udc00'
        >>> pb
        u'\U00010000'

        I would show that I was able to print it, but Slashdot hates Unicode.

    • I think the real lesson here is that byte sequences and character sequences are not the same. Every character sequence can be encoded to a byte sequence (by using an appropriate encoding), and every byte sequence can be converted to a character sequence (by means of some decoding), but they are fundamentally different things. I wonder if we wouldn't be better off making this explicit, and providing distinct string (character sequence) and blob (byte sequence) types.

      • Re:String f**k up (Score:4, Insightful)

        by spitzak ( 4019 ) on Friday October 03, 2008 @09:51PM (#25253129) Homepage

        I think the lesson is that there are ONLY byte sequences.

        The fact that some code can interpret that byte sequence and draw something on the screen that the user thinks of as "text" is completely irrelevant and should not be a fundamental datatype of a programming language. This should be part of the code that draws the text. Imagine if every other type of data, such as image pixels or sound samples, had a different IO routine and you could never read a file with the wrong routine because the conversion was lossy.

        The real problem is that everybody's mind has been polluted by decades of ASCII where there was no difference between characters and bytes. All I can suggest is to try to think of text as words or sentences. Nobody would suggest that it would be good to make all words use the same amount of storage, or that it is important that you be unable to split a string except at word boundaries. But there has been so much use of ASCII that people think this is important for "characters".

        I also believe there is a serious political-correctness problem. Otherwise logical programmers are consumed with guilt because Americans get the "better" short encodings, and therefore feel they have to punish themselves by making the conversion to i18n as painful as possible so that Americans have just as much trouble as anybody else. The fact that they have actually made I18N far harder for everybody and thus actually discouraged it is the ironic result of this guilt.

      • Re: (Score:3, Informative)

        by tazzzzz ( 203300 )

        Actually, this has been explicit in Python for some time. In Python 2.x, "string" objects are byte sequences and "unicode" objects are character sequences.

        What changes in Python 3.0 is that "unicode" objects have been renamed "string" and "string" objects have been renamed "bytes". So, not only is it explicit, but the naming makes more sense.

        The other related change is that string literals in your code are interpreted as Python 3.0 "string" objects ("unicode" in Python 2.x terminology), whereas previously y

    • Re:String f**k up (Score:4, Informative)

      by tazzzzz ( 203300 ) on Friday October 03, 2008 @10:09PM (#25253215) Homepage

      Reading the release, they have decided to really push 16-bit strings (they call this "Unicode" but it really is what is called UTF-16). I think this is a serious mistake.

      The proper solution is to use 8-bit strings, but any functions that care (such as I/O) should treat them as being UTF-8. Most functions do not care and thus the treatment of "Unicode" and "bytes" are the same.

      I'm going to try once more, slightly differently. Two other people apparently have tried and failed.

      Python 3.0's handling of strings is basically the same as Java's, because it has proven to work quite well there.

      For webapps, and the rules may be a little different on the desktop, "best practices" in Python for some time have been that you use unicode objects everywhere internally when you are representing text. When you hit a boundary (a file on disk, the net), you encode that unicode string into whatever encoding makes sense (often UTF-8). So far, so good, I hope?

      Python's internal representation of unicode objects is only relevant in that you need it to support whatever code points you care about. I don't think there are any code points that you can represent in UTF-8 that Python will screw up after decoding/encoding. I'm sure there are many people who would be interested to see such a test case.

      If you have a bunch of bytes that *might* be UTF-8, you're screwed. "process data that is likely to be text but must not be altered"? What do you mean by text? 7-bit ASCII? UTF-8? And where is the text coming from? Unless you tell Python the encoding of the file, you're going to get bytes out, not unicode objects.

      The whole point is that Python unicode objects know how to represent code points. If you get a set of bytes from somewhere you *have* to know what encoding it is in order to be able to treat it as a bunch of text characters. Python unicode objects will not be "bad UTF-16". How they're stored is not generally important. What's important is that Python internally keeps track of the code points and will either successfully convert to whatever encoded sequence of bytes you want or it will raise an exception because the encoding you've chosen doesn't have one of the characters in your string.

      Python 3.0 makes this all clearer. When you talk about a "string", you're talking about a bunch of unicode characters. Anything else is a collection of bytes.

      By the way, you can specify what encoding a Python source file is in so that your string literals are all properly decoded.

      For further reading...
      http://www.joelonsoftware.com/articles/Unicode.html [joelonsoftware.com]

    • Re: (Score:3, Insightful)

      The proper solution is to do what they did: hide from the programmer what internal format is used for strings. The only time programmers should know about the encoding is when they themselves explicitly select an encoding so that they can turn a bunch of bytes into a string or when they're sending the string out into the world as a bunch of bytes. Encode and decode explicitly at the edges. Internally, hide the implementation details. It's just basic OO.
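      Spelled out as code, "encode and decode explicitly at the edges" is just this (utf-8 chosen purely for illustration):

      def load_text(path):
          return open(path, "rb").read().decode("utf-8")    # bytes in, unicode out

      def save_text(path, text):
          open(path, "wb").write(text.encode("utf-8"))      # unicode in, bytes out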

      • by amorsen ( 7485 )

        Hiding is only good if it actually works. Once you leak information about the internal encoding to the program, you have lost. Such as the length of a one-character string sometimes being 2 -- have one program depend on that, and you can never change the supposedly hidden encoding. Of course no one would be stupid enough to return 2 when asked for the length of certain one-character strings...

        • Here's the thing: that only happens in Python if you go outside the BMP, but even in the best character encoding scheme, unless you normalize, you can't tell if é is U+00E9 (Latin small letter e with acute) or e plus U+0301 (Combining acute accent). So, you can never really trust the length of a Unicode string.

          Would it be better if Python reported the length of non-BMP characters correctly? Yes. But, given how funky Unicode can be, it's an understandable trade off to make.
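          The é example, concretely (unicodedata is in the standard library):

          import unicodedata
          a = u"\u00e9"     # é as a single code point
          b = u"e\u0301"    # e followed by a combining acute accent
          print a == b                                    # False
          print len(a), len(b)                            # 1 2
          print unicodedata.normalize("NFC", b) == a      # True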

    • The problem with UTF-16 is you cannot losslessly convert a string that *might* be UTF-8 to UTF-16 and then back again. This is because any illegal UTF-8 byte sequences will be lost or altered.

      Then set strict conversion, which will raise UnicodeError for any nonconforming byte sequences. My problem with UTF-16 is how it bloats in-memory databases of mostly-ASCII text by a factor of nearly 2 (or 4 if Python is compiled with UTF-32 to handle hieroglyphics and ancient Chinese).

      • by spitzak ( 4019 )

        Throwing exceptions on bad UTF-8 strings is great if they are strings you control. It is not useful for strings provided by the outside environment. I can assure you that users want that data copied even if it contains errors, and they only want to see an error message when the data is interpreted.

        The best that could be done with exceptions is make some kind of union of the UTF-16 and the bytes (or perhaps convert the bytes by just padding each out to 16 bits), along with a flag indicating if the data conve

  • by Animats ( 122034 ) on Friday October 03, 2008 @08:10PM (#25252493) Homepage

    Many essential third party libraries need to be converted for Python 3.0. I need M2Crypto (SSL support) and MySQLdb (MySQL support), neither of which is ready for Python 3.0, and neither of which has been updated in the last year or so.

    My guess is that it will be three years before stock mainstream Linux distros come with Python 3.0 and a set of libraries that work with it.

    • Re: (Score:2, Informative)

      by Ixokai ( 443555 )

      This is quite true, but sort of irrelevant. Even the core developers on Python-dev have been seen to state on more than one occasion that they don't expect Python 3.0 to be the "standard" for a period of time that will stretch to years: one? three? The specifics don't exactly matter.

      That's why they've released Python 2.6 and Python 3.0 in parallel (although 3.0 was recently delayed a little, the development of each has gone hand in hand); they fully expect to maintain the 2.x line for awhile,

  • by xixax ( 44677 ) on Friday October 03, 2008 @08:37PM (#25252655)

    Anthony Baxter gave a pretty good talk on the implications at LCA 2008 earlier this year.

    http://video.google.com/videoplay?docid=4264641260805367198&hl=en [google.com]

  • Old news... (Score:4, Interesting)

    by pdxp ( 1213906 ) on Friday October 03, 2008 @08:47PM (#25252711)
    3.0rc1 [python.org] is already available and has been for some time now. The advantage of 2.6 is not so much its backward compatibility as its ability to tell you exactly what needs to change (via runtime warnings) for 3.0 without actually breaking your code. I've been using both for months now, so this article isn't exactly hot news.
