  1. /* Generic object operations; and implementation of None (NoObject) */
  2. #include "Python.h"
  3. #include "frameobject.h"
  4. #ifdef __cplusplus
  5. extern "C" {
  6. #endif
  7. #ifdef Py_REF_DEBUG
  8. Py_ssize_t _Py_RefTotal;
  9. Py_ssize_t
  10. _Py_GetRefTotal(void)
  11. {
  12. PyObject *o;
  13. Py_ssize_t total = _Py_RefTotal;
  14. /* ignore the references to the dummy object of the dicts and sets
  15. because they are not reliable and not useful (now that the
  16. hash table code is well-tested) */
  17. o = _PyDict_Dummy();
  18. if (o != NULL)
  19. total -= o->ob_refcnt;
  20. o = _PySet_Dummy();
  21. if (o != NULL)
  22. total -= o->ob_refcnt;
  23. return total;
  24. }
  25. #endif /* Py_REF_DEBUG */
  26. int Py_DivisionWarningFlag;
  27. /* Object allocation routines used by NEWOBJ and NEWVAROBJ macros.
  28. These are used by the individual routines for object creation.
  29. Do not call them otherwise, they do not initialize the object! */
  30. #ifdef Py_TRACE_REFS
  31. /* Head of circular doubly-linked list of all objects. These are linked
  32. * together via the _ob_prev and _ob_next members of a PyObject, which
  33. * exist only in a Py_TRACE_REFS build.
  34. */
  35. static PyObject refchain = {&refchain, &refchain};
  36. /* Insert op at the front of the list of all objects. If force is true,
  37. * op is added even if _ob_prev and _ob_next are non-NULL already. If
  38. * force is false and _ob_prev or _ob_next are non-NULL, do nothing.
  39. * force should be true if and only if op points to freshly allocated,
  40. * uninitialized memory, or you've unlinked op from the list and are
  41. * relinking it into the front.
  42. * Note that objects are normally added to the list via _Py_NewReference,
  43. * which is called by PyObject_Init. Not all objects are initialized that
  44. * way, though; exceptions include statically allocated type objects, and
  45. * statically allocated singletons (like Py_True and Py_None).
  46. */
  47. void
  48. _Py_AddToAllObjects(PyObject *op, int force)
  49. {
  50. #ifdef Py_DEBUG
  51. if (!force) {
  52. /* If it's initialized memory, op must be in or out of
  53. * the list unambiguously.
  54. */
  55. assert((op->_ob_prev == NULL) == (op->_ob_next == NULL));
  56. }
  57. #endif
  58. if (force || op->_ob_prev == NULL) {
  59. op->_ob_next = refchain._ob_next;
  60. op->_ob_prev = &refchain;
  61. refchain._ob_next->_ob_prev = op;
  62. refchain._ob_next = op;
  63. }
  64. }
  65. #endif /* Py_TRACE_REFS */
  66. #ifdef COUNT_ALLOCS
  67. static PyTypeObject *type_list;
  68. /* All types are added to type_list, at least when
  69. they get one object created. That makes them
  70. immortal, which unfortunately contributes to
  71. garbage itself. If unlist_types_without_objects
  72. is set, they will be removed from the type_list
  73. once the last object is deallocated. */
  74. static int unlist_types_without_objects;
  75. extern Py_ssize_t tuple_zero_allocs, fast_tuple_allocs;
  76. extern Py_ssize_t quick_int_allocs, quick_neg_int_allocs;
  77. extern Py_ssize_t null_strings, one_strings;
  78. void
  79. dump_counts(FILE* f)
  80. {
  81. PyTypeObject *tp;
  82. for (tp = type_list; tp; tp = tp->tp_next)
  83. fprintf(f, "%s alloc'd: %" PY_FORMAT_SIZE_T "d, "
  84. "freed: %" PY_FORMAT_SIZE_T "d, "
  85. "max in use: %" PY_FORMAT_SIZE_T "d\n",
  86. tp->tp_name, tp->tp_allocs, tp->tp_frees,
  87. tp->tp_maxalloc);
  88. fprintf(f, "fast tuple allocs: %" PY_FORMAT_SIZE_T "d, "
  89. "empty: %" PY_FORMAT_SIZE_T "d\n",
  90. fast_tuple_allocs, tuple_zero_allocs);
  91. fprintf(f, "fast int allocs: pos: %" PY_FORMAT_SIZE_T "d, "
  92. "neg: %" PY_FORMAT_SIZE_T "d\n",
  93. quick_int_allocs, quick_neg_int_allocs);
  94. fprintf(f, "null strings: %" PY_FORMAT_SIZE_T "d, "
  95. "1-strings: %" PY_FORMAT_SIZE_T "d\n",
  96. null_strings, one_strings);
  97. }
  98. PyObject *
  99. get_counts(void)
  100. {
  101. PyTypeObject *tp;
  102. PyObject *result;
  103. PyObject *v;
  104. result = PyList_New(0);
  105. if (result == NULL)
  106. return NULL;
  107. for (tp = type_list; tp; tp = tp->tp_next) {
  108. v = Py_BuildValue("(snnn)", tp->tp_name, tp->tp_allocs,
  109. tp->tp_frees, tp->tp_maxalloc);
  110. if (v == NULL) {
  111. Py_DECREF(result);
  112. return NULL;
  113. }
  114. if (PyList_Append(result, v) < 0) {
  115. Py_DECREF(v);
  116. Py_DECREF(result);
  117. return NULL;
  118. }
  119. Py_DECREF(v);
  120. }
  121. return result;
  122. }
  123. void
  124. inc_count(PyTypeObject *tp)
  125. {
  126. if (tp->tp_next == NULL && tp->tp_prev == NULL) {
  127. /* first time; insert in linked list */
  128. if (tp->tp_next != NULL) /* sanity check */
  129. Py_FatalError("XXX inc_count sanity check");
  130. if (type_list)
  131. type_list->tp_prev = tp;
  132. tp->tp_next = type_list;
  133. /* Note that as of Python 2.2, heap-allocated type objects
  134. * can go away, but this code requires that they stay alive
  135. * until program exit. That's why we're careful with
  136. * refcounts here. type_list gets a new reference to tp,
  137. * while ownership of the reference type_list used to hold
  138. * (if any) was transferred to tp->tp_next in the line above.
  139. * tp is thus effectively immortal after this.
  140. */
  141. Py_INCREF(tp);
  142. type_list = tp;
  143. #ifdef Py_TRACE_REFS
  144. /* Also insert in the doubly-linked list of all objects,
  145. * if not already there.
  146. */
  147. _Py_AddToAllObjects((PyObject *)tp, 0);
  148. #endif
  149. }
  150. tp->tp_allocs++;
  151. if (tp->tp_allocs - tp->tp_frees > tp->tp_maxalloc)
  152. tp->tp_maxalloc = tp->tp_allocs - tp->tp_frees;
  153. }
  154. void dec_count(PyTypeObject *tp)
  155. {
  156. tp->tp_frees++;
  157. if (unlist_types_without_objects &&
  158. tp->tp_allocs == tp->tp_frees) {
  159. /* unlink the type from type_list */
  160. if (tp->tp_prev)
  161. tp->tp_prev->tp_next = tp->tp_next;
  162. else
  163. type_list = tp->tp_next;
  164. if (tp->tp_next)
  165. tp->tp_next->tp_prev = tp->tp_prev;
  166. tp->tp_next = tp->tp_prev = NULL;
  167. Py_DECREF(tp);
  168. }
  169. }
  170. #endif
  171. #ifdef Py_REF_DEBUG
  172. /* Log a fatal error; doesn't return. */
  173. void
  174. _Py_NegativeRefcount(const char *fname, int lineno, PyObject *op)
  175. {
  176. char buf[300];
  177. PyOS_snprintf(buf, sizeof(buf),
  178. "%s:%i object at %p has negative ref count "
  179. "%" PY_FORMAT_SIZE_T "d",
  180. fname, lineno, op, op->ob_refcnt);
  181. Py_FatalError(buf);
  182. }
  183. #endif /* Py_REF_DEBUG */
  184. void
  185. Py_IncRef(PyObject *o)
  186. {
  187. Py_XINCREF(o);
  188. }
  189. void
  190. Py_DecRef(PyObject *o)
  191. {
  192. Py_XDECREF(o);
  193. }
  194. PyObject *
  195. PyObject_Init(PyObject *op, PyTypeObject *tp)
  196. {
  197. if (op == NULL)
  198. return PyErr_NoMemory();
  199. /* Any changes should be reflected in PyObject_INIT (objimpl.h) */
  200. Py_TYPE(op) = tp;
  201. _Py_NewReference(op);
  202. return op;
  203. }
  204. PyVarObject *
  205. PyObject_InitVar(PyVarObject *op, PyTypeObject *tp, Py_ssize_t size)
  206. {
  207. if (op == NULL)
  208. return (PyVarObject *) PyErr_NoMemory();
  209. /* Any changes should be reflected in PyObject_INIT_VAR */
  210. op->ob_size = size;
  211. Py_TYPE(op) = tp;
  212. _Py_NewReference((PyObject *)op);
  213. return op;
  214. }
  215. PyObject *
  216. _PyObject_New(PyTypeObject *tp)
  217. {
  218. PyObject *op;
  219. op = (PyObject *) PyObject_MALLOC(_PyObject_SIZE(tp));
  220. if (op == NULL)
  221. return PyErr_NoMemory();
  222. return PyObject_INIT(op, tp);
  223. }
  224. PyVarObject *
  225. _PyObject_NewVar(PyTypeObject *tp, Py_ssize_t nitems)
  226. {
  227. PyVarObject *op;
  228. const size_t size = _PyObject_VAR_SIZE(tp, nitems);
  229. op = (PyVarObject *) PyObject_MALLOC(size);
  230. if (op == NULL)
  231. return (PyVarObject *)PyErr_NoMemory();
  232. return PyObject_INIT_VAR(op, tp, nitems);
  233. }
  234. int
  235. PyObject_Print(PyObject *op, FILE *fp, int flags)
  236. {
  237. int ret = 0;
  238. if (PyErr_CheckSignals())
  239. return -1;
  240. #ifdef USE_STACKCHECK
  241. if (PyOS_CheckStack()) {
  242. PyErr_SetString(PyExc_MemoryError, "stack overflow");
  243. return -1;
  244. }
  245. #endif
  246. clearerr(fp); /* Clear any previous error condition */
  247. if (op == NULL) {
  248. Py_BEGIN_ALLOW_THREADS
  249. fprintf(fp, "<nil>");
  250. Py_END_ALLOW_THREADS
  251. }
  252. else {
  253. if (op->ob_refcnt <= 0)
  254. /* XXX(twouters) cast refcount to long until %zd is
  255. universally available */
  256. Py_BEGIN_ALLOW_THREADS
  257. fprintf(fp, "<refcnt %ld at %p>",
  258. (long)op->ob_refcnt, op);
  259. Py_END_ALLOW_THREADS
  260. else {
  261. PyObject *s;
  262. if (flags & Py_PRINT_RAW)
  263. s = PyObject_Str(op);
  264. else
  265. s = PyObject_Repr(op);
  266. if (s == NULL)
  267. ret = -1;
  268. else if (PyBytes_Check(s)) {
  269. fwrite(PyBytes_AS_STRING(s), 1,
  270. PyBytes_GET_SIZE(s), fp);
  271. }
  272. else if (PyUnicode_Check(s)) {
  273. PyObject *t;
  274. t = PyUnicode_EncodeUTF8(PyUnicode_AS_UNICODE(s),
  275. PyUnicode_GET_SIZE(s),
  276. "backslashreplace");
  277. if (t == NULL)
  278. ret = 0;
  279. else {
  280. fwrite(PyBytes_AS_STRING(t), 1,
  281. PyBytes_GET_SIZE(t), fp);
  282. Py_DECREF(t);
  283. }
  284. }
  285. else {
  286. PyErr_Format(PyExc_TypeError,
  287. "str() or repr() returned '%.100s'",
  288. s->ob_type->tp_name);
  289. ret = -1;
  290. }
  291. Py_XDECREF(s);
  292. }
  293. }
  294. if (ret == 0) {
  295. if (ferror(fp)) {
  296. PyErr_SetFromErrno(PyExc_IOError);
  297. clearerr(fp);
  298. ret = -1;
  299. }
  300. }
  301. return ret;
  302. }
  303. /* For debugging convenience. Set a breakpoint here and call it from your DLL */
  304. void
  305. _Py_BreakPoint(void)
  306. {
  307. }
  308. /* For debugging convenience. See Misc/gdbinit for some useful gdb hooks */
  309. void
  310. _PyObject_Dump(PyObject* op)
  311. {
  312. if (op == NULL)
  313. fprintf(stderr, "NULL\n");
  314. else {
  315. #ifdef WITH_THREAD
  316. PyGILState_STATE gil;
  317. #endif
  318. fprintf(stderr, "object : ");
  319. #ifdef WITH_THREAD
  320. gil = PyGILState_Ensure();
  321. #endif
  322. (void)PyObject_Print(op, stderr, 0);
  323. #ifdef WITH_THREAD
  324. PyGILState_Release(gil);
  325. #endif
  326. /* XXX(twouters) cast refcount to long until %zd is
  327. universally available */
  328. fprintf(stderr, "\n"
  329. "type : %s\n"
  330. "refcount: %ld\n"
  331. "address : %p\n",
  332. Py_TYPE(op)==NULL ? "NULL" : Py_TYPE(op)->tp_name,
  333. (long)op->ob_refcnt,
  334. op);
  335. }
  336. }
  337. PyObject *
  338. PyObject_Repr(PyObject *v)
  339. {
  340. PyObject *res;
  341. if (PyErr_CheckSignals())
  342. return NULL;
  343. #ifdef USE_STACKCHECK
  344. if (PyOS_CheckStack()) {
  345. PyErr_SetString(PyExc_MemoryError, "stack overflow");
  346. return NULL;
  347. }
  348. #endif
  349. if (v == NULL)
  350. return PyUnicode_FromString("<NULL>");
  351. if (Py_TYPE(v)->tp_repr == NULL)
  352. return PyUnicode_FromFormat("<%s object at %p>",
  353. v->ob_type->tp_name, v);
  354. res = (*v->ob_type->tp_repr)(v);
  355. if (res != NULL && !PyUnicode_Check(res)) {
  356. PyErr_Format(PyExc_TypeError,
  357. "__repr__ returned non-string (type %.200s)",
  358. res->ob_type->tp_name);
  359. Py_DECREF(res);
  360. return NULL;
  361. }
  362. return res;
  363. }
  364. PyObject *
  365. PyObject_Str(PyObject *v)
  366. {
  367. PyObject *res;
  368. if (PyErr_CheckSignals())
  369. return NULL;
  370. #ifdef USE_STACKCHECK
  371. if (PyOS_CheckStack()) {
  372. PyErr_SetString(PyExc_MemoryError, "stack overflow");
  373. return NULL;
  374. }
  375. #endif
  376. if (v == NULL)
  377. return PyUnicode_FromString("<NULL>");
  378. if (PyUnicode_CheckExact(v)) {
  379. Py_INCREF(v);
  380. return v;
  381. }
  382. if (Py_TYPE(v)->tp_str == NULL)
  383. return PyObject_Repr(v);
  384. /* It is possible for a type to have a tp_str representation that loops
  385. infinitely. */
  386. if (Py_EnterRecursiveCall(" while getting the str of an object"))
  387. return NULL;
  388. res = (*Py_TYPE(v)->tp_str)(v);
  389. Py_LeaveRecursiveCall();
  390. if (res == NULL)
  391. return NULL;
  392. if (!PyUnicode_Check(res)) {
  393. PyErr_Format(PyExc_TypeError,
  394. "__str__ returned non-string (type %.200s)",
  395. Py_TYPE(res)->tp_name);
  396. Py_DECREF(res);
  397. return NULL;
  398. }
  399. return res;
  400. }
  401. PyObject *
  402. PyObject_ASCII(PyObject *v)
  403. {
  404. PyObject *repr, *ascii, *res;
  405. repr = PyObject_Repr(v);
  406. if (repr == NULL)
  407. return NULL;
  408. /* repr is guaranteed to be a PyUnicode object by PyObject_Repr */
  409. ascii = PyUnicode_EncodeASCII(
  410. PyUnicode_AS_UNICODE(repr),
  411. PyUnicode_GET_SIZE(repr),
  412. "backslashreplace");
  413. Py_DECREF(repr);
  414. if (ascii == NULL)
  415. return NULL;
  416. res = PyUnicode_DecodeASCII(
  417. PyBytes_AS_STRING(ascii),
  418. PyBytes_GET_SIZE(ascii),
  419. NULL);
  420. Py_DECREF(ascii);
  421. return res;
  422. }
  423. PyObject *
  424. PyObject_Bytes(PyObject *v)
  425. {
  426. PyObject *result, *func;
  427. static PyObject *bytesstring = NULL;
  428. if (v == NULL)
  429. return PyBytes_FromString("<NULL>");
  430. if (PyBytes_CheckExact(v)) {
  431. Py_INCREF(v);
  432. return v;
  433. }
  434. func = _PyObject_LookupSpecial(v, "__bytes__", &bytesstring);
  435. if (func != NULL) {
  436. result = PyObject_CallFunctionObjArgs(func, NULL);
  437. Py_DECREF(func);
  438. if (result == NULL)
  439. return NULL;
  440. if (!PyBytes_Check(result)) {
  441. PyErr_Format(PyExc_TypeError,
  442. "__bytes__ returned non-bytes (type %.200s)",
  443. Py_TYPE(result)->tp_name);
  444. Py_DECREF(result);
  445. return NULL;
  446. }
  447. return result;
  448. }
  449. else if (PyErr_Occurred())
  450. return NULL;
  451. return PyBytes_FromObject(v);
  452. }
  453. /* For Python 3.0.1 and later, the old three-way comparison has been
  454. completely removed in favour of rich comparisons. PyObject_Compare() and
  455. PyObject_Cmp() are gone, and the builtin cmp function no longer exists.
  456. The old tp_compare slot has been renamed to tp_reserved, and should no
  457. longer be used. Use tp_richcompare instead.
  458. See (*) below for practical amendments.
  459. tp_richcompare gets called with a first argument of the appropriate type
  460. and a second object of an arbitrary type. We never do any kind of
  461. coercion.
  462. The tp_richcompare slot should return an object, as follows:
  463. NULL if an exception occurred
  464. NotImplemented if the requested comparison is not implemented
  465. any other false value if the requested comparison is false
  466. any other true value if the requested comparison is true
  467. The PyObject_RichCompare[Bool]() wrappers raise TypeError when they get
  468. NotImplemented.
  469. (*) Practical amendments:
  470. - If rich comparison returns NotImplemented, == and != are decided by
  471. comparing the object pointer (i.e. falling back to the base object
  472. implementation).
  473. */
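/* Illustrative sketch, not part of object.c: how a hypothetical extension
 * type might fill the tp_richcompare slot described above.  The "myobject"
 * struct and its "value" field are invented for this example; the key
 * points are returning NotImplemented for operands the type does not
 * understand, and returning a new reference to Py_True or Py_False
 * otherwise. */

typedef struct {
    PyObject_HEAD
    long value;
} myobject;

static PyObject *
myobject_richcompare(PyObject *self, PyObject *other, int op)
{
    long a, b;
    int cmp;
    PyObject *res;

    if (Py_TYPE(other) != Py_TYPE(self)) {
        /* Not our type: let the machinery above try the reflected
         * operation or fall back to the default ==/!= behaviour. */
        Py_INCREF(Py_NotImplemented);
        return Py_NotImplemented;
    }
    a = ((myobject *)self)->value;
    b = ((myobject *)other)->value;
    switch (op) {
    case Py_LT: cmp = a <  b; break;
    case Py_LE: cmp = a <= b; break;
    case Py_EQ: cmp = a == b; break;
    case Py_NE: cmp = a != b; break;
    case Py_GT: cmp = a >  b; break;
    case Py_GE: cmp = a >= b; break;
    default:
        PyErr_BadArgument();
        return NULL;
    }
    res = cmp ? Py_True : Py_False;
    Py_INCREF(res);
    return res;
}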
  474. /* Map rich comparison operators to their swapped version, e.g. LT <--> GT */
  475. int _Py_SwappedOp[] = {Py_GT, Py_GE, Py_EQ, Py_NE, Py_LT, Py_LE};
  476. static char *opstrings[] = {"<", "<=", "==", "!=", ">", ">="};
  477. /* Perform a rich comparison, raising TypeError when the requested comparison
  478. operator is not supported. */
  479. static PyObject *
  480. do_richcompare(PyObject *v, PyObject *w, int op)
  481. {
  482. richcmpfunc f;
  483. PyObject *res;
  484. int checked_reverse_op = 0;
  485. if (v->ob_type != w->ob_type &&
  486. PyType_IsSubtype(w->ob_type, v->ob_type) &&
  487. (f = w->ob_type->tp_richcompare) != NULL) {
  488. checked_reverse_op = 1;
  489. res = (*f)(w, v, _Py_SwappedOp[op]);
  490. if (res != Py_NotImplemented)
  491. return res;
  492. Py_DECREF(res);
  493. }
  494. if ((f = v->ob_type->tp_richcompare) != NULL) {
  495. res = (*f)(v, w, op);
  496. if (res != Py_NotImplemented)
  497. return res;
  498. Py_DECREF(res);
  499. }
  500. if (!checked_reverse_op && (f = w->ob_type->tp_richcompare) != NULL) {
  501. res = (*f)(w, v, _Py_SwappedOp[op]);
  502. if (res != Py_NotImplemented)
  503. return res;
  504. Py_DECREF(res);
  505. }
  506. /* If neither object implements it, provide a sensible default
  507. for == and !=, but raise an exception for ordering. */
  508. switch (op) {
  509. case Py_EQ:
  510. res = (v == w) ? Py_True : Py_False;
  511. break;
  512. case Py_NE:
  513. res = (v != w) ? Py_True : Py_False;
  514. break;
  515. default:
  516. /* XXX Special-case None so it doesn't show as NoneType() */
  517. PyErr_Format(PyExc_TypeError,
  518. "unorderable types: %.100s() %s %.100s()",
  519. v->ob_type->tp_name,
  520. opstrings[op],
  521. w->ob_type->tp_name);
  522. return NULL;
  523. }
  524. Py_INCREF(res);
  525. return res;
  526. }
  527. /* Perform a rich comparison with object result. This wraps do_richcompare()
  528. with a check for NULL arguments and a recursion check. */
  529. PyObject *
  530. PyObject_RichCompare(PyObject *v, PyObject *w, int op)
  531. {
  532. PyObject *res;
  533. assert(Py_LT <= op && op <= Py_GE);
  534. if (v == NULL || w == NULL) {
  535. if (!PyErr_Occurred())
  536. PyErr_BadInternalCall();
  537. return NULL;
  538. }
  539. if (Py_EnterRecursiveCall(" in comparison"))
  540. return NULL;
  541. res = do_richcompare(v, w, op);
  542. Py_LeaveRecursiveCall();
  543. return res;
  544. }
  545. /* Perform a rich comparison with integer result. This wraps
  546. PyObject_RichCompare(), returning -1 for error, 0 for false, 1 for true. */
  547. int
  548. PyObject_RichCompareBool(PyObject *v, PyObject *w, int op)
  549. {
  550. PyObject *res;
  551. int ok;
  552. /* Quick result when objects are the same.
  553. Guarantees that identity implies equality. */
  554. if (v == w) {
  555. if (op == Py_EQ)
  556. return 1;
  557. else if (op == Py_NE)
  558. return 0;
  559. }
  560. res = PyObject_RichCompare(v, w, op);
  561. if (res == NULL)
  562. return -1;
  563. if (PyBool_Check(res))
  564. ok = (res == Py_True);
  565. else
  566. ok = PyObject_IsTrue(res);
  567. Py_DECREF(res);
  568. return ok;
  569. }
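/* Illustrative usage, not part of object.c: a caller scanning a C array of
 * objects for a match, the way container types typically use
 * PyObject_RichCompareBool.  The identity shortcut above means an element
 * always matches itself without tp_richcompare ever being called.  The
 * names "items", "n" and "value" are placeholders for this example. */

static Py_ssize_t
find_first_equal(PyObject **items, Py_ssize_t n, PyObject *value)
{
    Py_ssize_t i;
    for (i = 0; i < n; i++) {
        int eq = PyObject_RichCompareBool(items[i], value, Py_EQ);
        if (eq < 0)
            return -2;      /* comparison raised; exception is set */
        if (eq)
            return i;       /* found an equal (or identical) element */
    }
    return -1;              /* no match */
}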
  570. /* Set of hash utility functions to help maintaining the invariant that
  571. if a==b then hash(a)==hash(b)
  572. All the utility functions (_Py_Hash*()) return "-1" to signify an error.
  573. */
  574. /* For numeric types, the hash of a number x is based on the reduction
  575. of x modulo the prime P = 2**_PyHASH_BITS - 1. It's designed so that
  576. hash(x) == hash(y) whenever x and y are numerically equal, even if
  577. x and y have different types.
  578. A quick summary of the hashing strategy:
  579. (1) First define the 'reduction of x modulo P' for any rational
  580. number x; this is a standard extension of the usual notion of
  581. reduction modulo P for integers. If x == p/q (written in lowest
  582. terms), the reduction is interpreted as the reduction of p times
  583. the inverse of the reduction of q, all modulo P; if q is exactly
  584. divisible by P then define the reduction to be infinity. So we've
  585. got a well-defined map
  586. reduce : { rational numbers } -> { 0, 1, 2, ..., P-1, infinity }.
  587. (2) Now for a rational number x, define hash(x) by:
  588. reduce(x) if x >= 0
  589. -reduce(-x) if x < 0
  590. If the result of the reduction is infinity (this is impossible for
  591. integers, floats and Decimals) then use the predefined hash value
  592. _PyHASH_INF for x >= 0, or -_PyHASH_INF for x < 0, instead.
  593. _PyHASH_INF, -_PyHASH_INF and _PyHASH_NAN are also used for the
  594. hashes of float and Decimal infinities and nans.
  595. A selling point for the above strategy is that it makes it possible
  596. to compute hashes of decimal and binary floating-point numbers
  597. efficiently, even if the exponent of the binary or decimal number
  598. is large. The key point is that
  599. reduce(x * y) == reduce(x) * reduce(y) (modulo _PyHASH_MODULUS)
  600. provided that {reduce(x), reduce(y)} != {0, infinity}. The reduction of a
  601. binary or decimal float is never infinity, since the denominator is a power
  602. of 2 (for binary) or a divisor of a power of 10 (for decimal). So we have,
  603. for nonnegative x,
  604. reduce(x * 2**e) == reduce(x) * reduce(2**e) % _PyHASH_MODULUS
  605. reduce(x * 10**e) == reduce(x) * reduce(10**e) % _PyHASH_MODULUS
  606. and reduce(10**e) can be computed efficiently by the usual modular
  607. exponentiation algorithm. For reduce(2**e) it's even better: since
  608. P is of the form 2**n-1, reduce(2**e) is 2**(e mod n), and multiplication
  609. by 2**(e mod n) modulo 2**n-1 just amounts to a rotation of bits.
  610. */
  611. Py_hash_t
  612. _Py_HashDouble(double v)
  613. {
  614. int e, sign;
  615. double m;
  616. Py_uhash_t x, y;
  617. if (!Py_IS_FINITE(v)) {
  618. if (Py_IS_INFINITY(v))
  619. return v > 0 ? _PyHASH_INF : -_PyHASH_INF;
  620. else
  621. return _PyHASH_NAN;
  622. }
  623. m = frexp(v, &e);
  624. sign = 1;
  625. if (m < 0) {
  626. sign = -1;
  627. m = -m;
  628. }
  629. /* process 28 bits at a time; this should work well both for binary
  630. and hexadecimal floating point. */
  631. x = 0;
  632. while (m) {
  633. x = ((x << 28) & _PyHASH_MODULUS) | x >> (_PyHASH_BITS - 28);
  634. m *= 268435456.0; /* 2**28 */
  635. e -= 28;
  636. y = (Py_uhash_t)m; /* pull out integer part */
  637. m -= y;
  638. x += y;
  639. if (x >= _PyHASH_MODULUS)
  640. x -= _PyHASH_MODULUS;
  641. }
  642. /* adjust for the exponent; first reduce it modulo _PyHASH_BITS */
  643. e = e >= 0 ? e % _PyHASH_BITS : _PyHASH_BITS-1-((-1-e) % _PyHASH_BITS);
  644. x = ((x << e) & _PyHASH_MODULUS) | x >> (_PyHASH_BITS - e);
  645. x = x * sign;
  646. if (x == (Py_uhash_t)-1)
  647. x = (Py_uhash_t)-2;
  648. return (Py_hash_t)x;
  649. }
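/* Illustrative sketch, not part of object.c: the comment above claims that
 * reduce(2**e) is cheap to compute.  Because the modulus _PyHASH_MODULUS is
 * 2**_PyHASH_BITS - 1, we have 2**_PyHASH_BITS == 1 (mod _PyHASH_MODULUS),
 * so for a nonnegative exponent e the reduction collapses to a single
 * shift.  The helper name is invented for this example. */

static Py_uhash_t
reduce_power_of_two(unsigned long e)
{
    /* 2**e == 2**(e % _PyHASH_BITS) (mod _PyHASH_MODULUS), and that value
       is already smaller than the modulus, so no further reduction is
       needed. */
    return (Py_uhash_t)1 << (e % _PyHASH_BITS);
}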
  650. Py_hash_t
  651. _Py_HashPointer(void *p)
  652. {
  653. Py_hash_t x;
  654. size_t y = (size_t)p;
  655. /* bottom 3 or 4 bits are likely to be 0; rotate y by 4 to avoid
  656. excessive hash collisions for dicts and sets */
  657. y = (y >> 4) | (y << (8 * SIZEOF_VOID_P - 4));
  658. x = (Py_hash_t)y;
  659. if (x == -1)
  660. x = -2;
  661. return x;
  662. }
  663. Py_hash_t
  664. PyObject_HashNotImplemented(PyObject *v)
  665. {
  666. PyErr_Format(PyExc_TypeError, "unhashable type: '%.200s'",
  667. Py_TYPE(v)->tp_name);
  668. return -1;
  669. }
  670. Py_hash_t
  671. PyObject_Hash(PyObject *v)
  672. {
  673. PyTypeObject *tp = Py_TYPE(v);
  674. if (tp->tp_hash != NULL)
  675. return (*tp->tp_hash)(v);
  676. /* To keep to the general practice that inheriting
  677. * solely from object in C code should work without
  678. * an explicit call to PyType_Ready, we implicitly call
  679. * PyType_Ready here and then check the tp_hash slot again
  680. */
  681. if (tp->tp_dict == NULL) {
  682. if (PyType_Ready(tp) < 0)
  683. return -1;
  684. if (tp->tp_hash != NULL)
  685. return (*tp->tp_hash)(v);
  686. }
  687. /* Otherwise, the object can't be hashed */
  688. return PyObject_HashNotImplemented(v);
  689. }
  690. PyObject *
  691. PyObject_GetAttrString(PyObject *v, const char *name)
  692. {
  693. PyObject *w, *res;
  694. if (Py_TYPE(v)->tp_getattr != NULL)
  695. return (*Py_TYPE(v)->tp_getattr)(v, (char*)name);
  696. w = PyUnicode_InternFromString(name);
  697. if (w == NULL)
  698. return NULL;
  699. res = PyObject_GetAttr(v, w);
  700. Py_XDECREF(w);
  701. return res;
  702. }
  703. int
  704. PyObject_HasAttrString(PyObject *v, const char *name)
  705. {
  706. PyObject *res = PyObject_GetAttrString(v, name);
  707. if (res != NULL) {
  708. Py_DECREF(res);
  709. return 1;
  710. }
  711. PyErr_Clear();
  712. return 0;
  713. }
  714. int
  715. PyObject_SetAttrString(PyObject *v, const char *name, PyObject *w)
  716. {
  717. PyObject *s;
  718. int res;
  719. if (Py_TYPE(v)->tp_setattr != NULL)
  720. return (*Py_TYPE(v)->tp_setattr)(v, (char*)name, w);
  721. s = PyUnicode_InternFromString(name);
  722. if (s == NULL)
  723. return -1;
  724. res = PyObject_SetAttr(v, s, w);
  725. Py_XDECREF(s);
  726. return res;
  727. }
  728. PyObject *
  729. PyObject_GetAttr(PyObject *v, PyObject *name)
  730. {
  731. PyTypeObject *tp = Py_TYPE(v);
  732. if (!PyUnicode_Check(name)) {
  733. PyErr_Format(PyExc_TypeError,
  734. "attribute name must be string, not '%.200s'",
  735. name->ob_type->tp_name);
  736. return NULL;
  737. }
  738. if (tp->tp_getattro != NULL)
  739. return (*tp->tp_getattro)(v, name);
  740. if (tp->tp_getattr != NULL) {
  741. char *name_str = _PyUnicode_AsString(name);
  742. if (name_str == NULL)
  743. return NULL;
  744. return (*tp->tp_getattr)(v, name_str);
  745. }
  746. PyErr_Format(PyExc_AttributeError,
  747. "'%.50s' object has no attribute '%U'",
  748. tp->tp_name, name);
  749. return NULL;
  750. }
  751. int
  752. PyObject_HasAttr(PyObject *v, PyObject *name)
  753. {
  754. PyObject *res = PyObject_GetAttr(v, name);
  755. if (res != NULL) {
  756. Py_DECREF(res);
  757. return 1;
  758. }
  759. PyErr_Clear();
  760. return 0;
  761. }
  762. int
  763. PyObject_SetAttr(PyObject *v, PyObject *name, PyObject *value)
  764. {
  765. PyTypeObject *tp = Py_TYPE(v);
  766. int err;
  767. if (!PyUnicode_Check(name)) {
  768. PyErr_Format(PyExc_TypeError,
  769. "attribute name must be string, not '%.200s'",
  770. name->ob_type->tp_name);
  771. return -1;
  772. }
  773. Py_INCREF(name);
  774. PyUnicode_InternInPlace(&name);
  775. if (tp->tp_setattro != NULL) {
  776. err = (*tp->tp_setattro)(v, name, value);
  777. Py_DECREF(name);
  778. return err;
  779. }
  780. if (tp->tp_setattr != NULL) {
  781. char *name_str = _PyUnicode_AsString(name);
  782. if (name_str == NULL)
  783. return -1;
  784. err = (*tp->tp_setattr)(v, name_str, value);
  785. Py_DECREF(name);
  786. return err;
  787. }
  788. Py_DECREF(name);
789. assert(name->ob_refcnt >= 1);  /* name was interned above, so a reference survives the DECREF */
  790. if (tp->tp_getattr == NULL && tp->tp_getattro == NULL)
  791. PyErr_Format(PyExc_TypeError,
  792. "'%.100s' object has no attributes "
  793. "(%s .%U)",
  794. tp->tp_name,
  795. value==NULL ? "del" : "assign to",
  796. name);
  797. else
  798. PyErr_Format(PyExc_TypeError,
  799. "'%.100s' object has only read-only attributes "
  800. "(%s .%U)",
  801. tp->tp_name,
  802. value==NULL ? "del" : "assign to",
  803. name);
  804. return -1;
  805. }
  806. /* Helper to get a pointer to an object's __dict__ slot, if any */
  807. PyObject **
  808. _PyObject_GetDictPtr(PyObject *obj)
  809. {
  810. Py_ssize_t dictoffset;
  811. PyTypeObject *tp = Py_TYPE(obj);
  812. dictoffset = tp->tp_dictoffset;
  813. if (dictoffset == 0)
  814. return NULL;
  815. if (dictoffset < 0) {
  816. Py_ssize_t tsize;
  817. size_t size;
  818. tsize = ((PyVarObject *)obj)->ob_size;
  819. if (tsize < 0)
  820. tsize = -tsize;
  821. size = _PyObject_VAR_SIZE(tp, tsize);
822. dictoffset += (Py_ssize_t)size;
  823. assert(dictoffset > 0);
  824. assert(dictoffset % SIZEOF_VOID_P == 0);
  825. }
  826. return (PyObject **) ((char *)obj + dictoffset);
  827. }
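/*
 * Worked example for the negative-offset branch above (hypothetical numbers,
 * assuming 8-byte pointers): for a variable-size type with
 * tp_dictoffset == -8 and an instance whose ob_size is 3,
 *
 *     size       = _PyObject_VAR_SIZE(tp, 3)      bytes actually allocated
 *     dictoffset = -8 + size                      the __dict__ pointer lives
 *                                                 in the last pointer-sized
 *                                                 slot of the object
 *
 * so the instance dict pointer is stored at the very end of the memory block
 * rather than at a fixed positive offset from the start.
 */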
  828. PyObject *
  829. PyObject_SelfIter(PyObject *obj)
  830. {
  831. Py_INCREF(obj);
  832. return obj;
  833. }
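/*
 * Usage sketch: iterator types conventionally return themselves from
 * __iter__, so their tp_iter slot can simply point at PyObject_SelfIter
 * (MyIter_Type and myiter_next are hypothetical):
 *
 *     static PyTypeObject MyIter_Type = {
 *         PyVarObject_HEAD_INIT(NULL, 0)
 *         "mymodule.myiter",
 *         ...                          other slots elided
 *         PyObject_SelfIter,           in the tp_iter slot
 *         myiter_next,                 in the tp_iternext slot
 *         ...
 *     };
 */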
  834. /* Helper used when the __next__ method is removed from a type:
  835. tp_iternext is never NULL and can be safely called without checking
  836. on every iteration.
  837. */
  838. PyObject *
  839. _PyObject_NextNotImplemented(PyObject *self)
  840. {
  841. PyErr_Format(PyExc_TypeError,
  842. "'%.200s' object is not iterable",
  843. Py_TYPE(self)->tp_name);
  844. return NULL;
  845. }
  846. /* Generic GetAttr functions - put these in your tp_[gs]etattro slot */
  847. PyObject *
  848. _PyObject_GenericGetAttrWithDict(PyObject *obj, PyObject *name, PyObject *dict)
  849. {
  850. PyTypeObject *tp = Py_TYPE(obj);
  851. PyObject *descr = NULL;
  852. PyObject *res = NULL;
  853. descrgetfunc f;
  854. Py_ssize_t dictoffset;
  855. PyObject **dictptr;
  856. if (!PyUnicode_Check(name)){
  857. PyErr_Format(PyExc_TypeError,
  858. "attribute name must be string, not '%.200s'",
  859. name->ob_type->tp_name);
  860. return NULL;
  861. }
  862. else
  863. Py_INCREF(name);
  864. if (tp->tp_dict == NULL) {
  865. if (PyType_Ready(tp) < 0)
  866. goto done;
  867. }
  868. descr = _PyType_Lookup(tp, name);
  869. Py_XINCREF(descr);
  870. f = NULL;
  871. if (descr != NULL) {
  872. f = descr->ob_type->tp_descr_get;
  873. if (f != NULL && PyDescr_IsData(descr)) {
  874. res = f(descr, obj, (PyObject *)obj->ob_type);
  875. Py_DECREF(descr);
  876. goto done;
  877. }
  878. }
  879. if (dict == NULL) {
  880. /* Inline _PyObject_GetDictPtr */
  881. dictoffset = tp->tp_dictoffset;
  882. if (dictoffset != 0) {
  883. if (dictoffset < 0) {
  884. Py_ssize_t tsize;
  885. size_t size;
  886. tsize = ((PyVarObject *)obj)->ob_size;
  887. if (tsize < 0)
  888. tsize = -tsize;
  889. size = _PyObject_VAR_SIZE(tp, tsize);
890. dictoffset += (Py_ssize_t)size;
  891. assert(dictoffset > 0);
  892. assert(dictoffset % SIZEOF_VOID_P == 0);
  893. }
  894. dictptr = (PyObject **) ((char *)obj + dictoffset);
  895. dict = *dictptr;
  896. }
  897. }
  898. if (dict != NULL) {
  899. Py_INCREF(dict);
  900. res = PyDict_GetItem(dict, name);
  901. if (res != NULL) {
  902. Py_INCREF(res);
  903. Py_XDECREF(descr);
  904. Py_DECREF(dict);
  905. goto done;
  906. }
  907. Py_DECREF(dict);
  908. }
  909. if (f != NULL) {
  910. res = f(descr, obj, (PyObject *)Py_TYPE(obj));
  911. Py_DECREF(descr);
  912. goto done;
  913. }
  914. if (descr != NULL) {
  915. res = descr;
  916. /* descr was already increfed above */
  917. goto done;
  918. }
  919. PyErr_Format(PyExc_AttributeError,
  920. "'%.50s' object has no attribute '%U'",
  921. tp->tp_name, name);
  922. done:
  923. Py_DECREF(name);
  924. return res;
  925. }
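/*
 * Informal summary of the lookup order implemented above (a reading aid,
 * not additional behaviour):
 *
 *     1. look the name up on the type with _PyType_Lookup (walks the MRO);
 *        if the result is a data descriptor (its type defines both
 *        tp_descr_get and tp_descr_set), call its tp_descr_get and return;
 *     2. otherwise consult the instance __dict__ and return the value if
 *        the name is present;
 *     3. otherwise, if step 1 found a non-data descriptor with tp_descr_get,
 *        call it;
 *     4. otherwise return the class attribute itself, or raise
 *        AttributeError if nothing was found at all.
 */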
  926. PyObject *
  927. PyObject_GenericGetAttr(PyObject *obj, PyObject *name)
  928. {
  929. return _PyObject_GenericGetAttrWithDict(obj, name, NULL);
  930. }
  931. int
  932. _PyObject_GenericSetAttrWithDict(PyObject *obj, PyObject *name,
  933. PyObject *value, PyObject *dict)
  934. {
  935. PyTypeObject *tp = Py_TYPE(obj);
  936. PyObject *descr;
  937. descrsetfunc f;
  938. PyObject **dictptr;
  939. int res = -1;
  940. if (!PyUnicode_Check(name)){
  941. PyErr_Format(PyExc_TypeError,
  942. "attribute name must be string, not '%.200s'",
  943. name->ob_type->tp_name);
  944. return -1;
  945. }
  946. else
  947. Py_INCREF(name);
  948. if (tp->tp_dict == NULL) {
  949. if (PyType_Ready(tp) < 0)
  950. goto done;
  951. }
  952. descr = _PyType_Lookup(tp, name);
  953. f = NULL;
  954. if (descr != NULL) {
  955. f = descr->ob_type->tp_descr_set;
  956. if (f != NULL && PyDescr_IsData(descr)) {
  957. res = f(descr, obj, value);
  958. goto done;
  959. }
  960. }
  961. if (dict == NULL) {
  962. dictptr = _PyObject_GetDictPtr(obj);
  963. if (dictptr != NULL) {
  964. dict = *dictptr;
  965. if (dict == NULL && value != NULL) {
  966. dict = PyDict_New();
  967. if (dict == NULL)
  968. goto done;
  969. *dictptr = dict;
  970. }
  971. }
  972. }
  973. if (dict != NULL) {
  974. Py_INCREF(dict);
  975. if (value == NULL)
  976. res = PyDict_DelItem(dict, name);
  977. else
  978. res = PyDict_SetItem(dict, name, value);
  979. if (res < 0 && PyErr_ExceptionMatches(PyExc_KeyError))
  980. PyErr_SetObject(PyExc_AttributeError, name);
  981. Py_DECREF(dict);
  982. goto done;
  983. }
  984. if (f != NULL) {
  985. res = f(descr, obj, value);
  986. goto done;
  987. }
  988. if (descr == NULL) {
  989. PyErr_Format(PyExc_AttributeError,
  990. "'%.100s' object has no attribute '%U'",
  991. tp->tp_name, name);
  992. goto done;
  993. }
  994. PyErr_Format(PyExc_AttributeError,
  995. "'%.50s' object attribute '%U' is read-only",
  996. tp->tp_name, name);
  997. done:
  998. Py_DECREF(name);
  999. return res;
  1000. }
  1001. int
  1002. PyObject_GenericSetAttr(PyObject *obj, PyObject *name, PyObject *value)
  1003. {
  1004. return _PyObject_GenericSetAttrWithDict(obj, name, value, NULL);
  1005. }
  1006. /* Test a value used as condition, e.g., in a for or if statement.
  1007. Return -1 if an error occurred */
  1008. int
  1009. PyObject_IsTrue(PyObject *v)
  1010. {
  1011. Py_ssize_t res;
  1012. if (v == Py_True)
  1013. return 1;
  1014. if (v == Py_False)
  1015. return 0;
  1016. if (v == Py_None)
  1017. return 0;
  1018. else if (v->ob_type->tp_as_number != NULL &&
  1019. v->ob_type->tp_as_number->nb_bool != NULL)
  1020. res = (*v->ob_type->tp_as_number->nb_bool)(v);
  1021. else if (v->ob_type->tp_as_mapping != NULL &&
  1022. v->ob_type->tp_as_mapping->mp_length != NULL)
  1023. res = (*v->ob_type->tp_as_mapping->mp_length)(v);
  1024. else if (v->ob_type->tp_as_sequence != NULL &&
  1025. v->ob_type->tp_as_sequence->sq_length != NULL)
  1026. res = (*v->ob_type->tp_as_sequence->sq_length)(v);
  1027. else
  1028. return 1;
  1029. /* if it is negative, it should be either -1 or -2 */
  1030. return (res > 0) ? 1 : Py_SAFE_DOWNCAST(res, Py_ssize_t, int);
  1031. }
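/*
 * Minimal usage sketch (hypothetical caller): the result is tri-state, so
 * the error case must be checked before the truth value is used:
 *
 *     int istrue = PyObject_IsTrue(v);
 *     if (istrue < 0)
 *         return NULL;             nb_bool / mp_length / sq_length raised
 *     if (istrue) {
 *         ...                      v was truthy
 *     }
 */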
  1032. /* equivalent of 'not v'
  1033. Return -1 if an error occurred */
  1034. int
  1035. PyObject_Not(PyObject *v)
  1036. {
  1037. int res;
  1038. res = PyObject_IsTrue(v);
  1039. if (res < 0)
  1040. return res;
  1041. return res == 0;
  1042. }
  1043. /* Test whether an object can be called */
  1044. int
  1045. PyCallable_Check(PyObject *x)
  1046. {
  1047. if (x == NULL)
  1048. return 0;
  1049. return x->ob_type->tp_call != NULL;
  1050. }
  1051. /* ------------------------- PyObject_Dir() helpers ------------------------- */
  1052. /* Helper for PyObject_Dir.
  1053. Merge the __dict__ of aclass into dict, and recursively also all
  1054. the __dict__s of aclass's base classes. The order of merging isn't
  1055. defined, as it's expected that only the final set of dict keys is
  1056. interesting.
  1057. Return 0 on success, -1 on error.
  1058. */
  1059. static int
  1060. merge_class_dict(PyObject* dict, PyObject* aclass)
  1061. {
  1062. PyObject *classdict;
  1063. PyObject *bases;
  1064. assert(PyDict_Check(dict));
  1065. assert(aclass);
  1066. /* Merge in the type's dict (if any). */
  1067. classdict = PyObject_GetAttrString(aclass, "__dict__");
  1068. if (classdict == NULL)
  1069. PyErr_Clear();
  1070. else {
  1071. int status = PyDict_Update(dict, classdict);
  1072. Py_DECREF(classdict);
  1073. if (status < 0)
  1074. return -1;
  1075. }
  1076. /* Recursively merge in the base types' (if any) dicts. */
  1077. bases = PyObject_GetAttrString(aclass, "__bases__");
  1078. if (bases == NULL)
  1079. PyErr_Clear();
  1080. else {
  1081. /* We have no guarantee that bases is a real tuple */
  1082. Py_ssize_t i, n;
  1083. n = PySequence_Size(bases); /* This better be right */
  1084. if (n < 0)
  1085. PyErr_Clear();
  1086. else {
  1087. for (i = 0; i < n; i++) {
  1088. int status;
  1089. PyObject *base = PySequence_GetItem(bases, i);
  1090. if (base == NULL) {
  1091. Py_DECREF(bases);
  1092. return -1;
  1093. }
  1094. status = merge_class_dict(dict, base);
  1095. Py_DECREF(base);
  1096. if (status < 0) {
  1097. Py_DECREF(bases);
  1098. return -1;
  1099. }
  1100. }
  1101. }
  1102. Py_DECREF(bases);
  1103. }
  1104. return 0;
  1105. }
  1106. /* Helper for PyObject_Dir without arguments: returns the local scope. */
  1107. static PyObject *
  1108. _dir_locals(void)
  1109. {
  1110. PyObject *names;
  1111. PyObject *locals = PyEval_GetLocals();
  1112. if (locals == NULL) {
  1113. PyErr_SetString(PyExc_SystemError, "frame does not exist");
  1114. return NULL;
  1115. }
  1116. names = PyMapping_Keys(locals);
  1117. if (!names)
  1118. return NULL;
  1119. if (!PyList_Check(names)) {
  1120. PyErr_Format(PyExc_TypeError,
  1121. "dir(): expected keys() of locals to be a list, "
  1122. "not '%.200s'", Py_TYPE(names)->tp_name);
  1123. Py_DECREF(names);
  1124. return NULL;
  1125. }
  1126. /* the locals don't need to be DECREF'd */
  1127. return names;
  1128. }
  1129. /* Helper for PyObject_Dir of type objects: returns __dict__ and __bases__.
  1130. We deliberately don't suck up its __class__, as methods belonging to the
  1131. metaclass would probably be more confusing than helpful.
  1132. */
  1133. static PyObject *
  1134. _specialized_dir_type(PyObject *obj)
  1135. {
  1136. PyObject *result = NULL;
  1137. PyObject *dict = PyDict_New();
  1138. if (dict != NULL && merge_class_dict(dict, obj) == 0)
  1139. result = PyDict_Keys(dict);
  1140. Py_XDECREF(dict);
  1141. return result;
  1142. }
  1143. /* Helper for PyObject_Dir of module objects: returns the module's __dict__. */
  1144. static PyObject *
  1145. _specialized_dir_module(PyObject *obj)
  1146. {
  1147. PyObject *result = NULL;
  1148. PyObject *dict = PyObject_GetAttrString(obj, "__dict__");
  1149. if (dict != NULL) {
  1150. if (PyDict_Check(dict))
  1151. result = PyDict_Keys(dict);
  1152. else {
  1153. const char *name = PyModule_GetName(obj);
  1154. if (name)
  1155. PyErr_Format(PyExc_TypeError,
  1156. "%.200s.__dict__ is not a dictionary",
  1157. name);
  1158. }
  1159. }
  1160. Py_XDECREF(dict);
  1161. return result;
  1162. }
  1163. /* Helper for PyObject_Dir of generic objects: returns __dict__, __class__,
  1164. and recursively up the __class__.__bases__ chain.
  1165. */
  1166. static PyObject *
  1167. _generic_dir(PyObject *obj)
  1168. {
  1169. PyObject *result = NULL;
  1170. PyObject *dict = NULL;
  1171. PyObject *itsclass = NULL;
  1172. /* Get __dict__ (which may or may not be a real dict...) */
  1173. dict = PyObject_GetAttrString(obj, "__dict__");
  1174. if (dict == NULL) {
  1175. PyErr_Clear();
  1176. dict = PyDict_New();
  1177. }
  1178. else if (!PyDict_Check(dict)) {
  1179. Py_DECREF(dict);
  1180. dict = PyDict_New();
  1181. }
  1182. else {
  1183. /* Copy __dict__ to avoid mutating it. */
  1184. PyObject *temp = PyDict_Copy(dict);
  1185. Py_DECREF(dict);
  1186. dict = temp;
  1187. }
  1188. if (dict == NULL)
  1189. goto error;
  1190. /* Merge in attrs reachable from its class. */
  1191. itsclass = PyObject_GetAttrString(obj, "__class__");
  1192. if (itsclass == NULL)
  1193. /* XXX(tomer): Perhaps fall back to obj->ob_type if no
  1194. __class__ exists? */
  1195. PyErr_Clear();
  1196. else {
  1197. if (merge_class_dict(dict, itsclass) != 0)
  1198. goto error;
  1199. }
  1200. result = PyDict_Keys(dict);
  1201. /* fall through */
  1202. error:
  1203. Py_XDECREF(itsclass);
  1204. Py_XDECREF(dict);
  1205. return result;
  1206. }
  1207. /* Helper for PyObject_Dir: object introspection.
  1208. This calls one of the above specialized versions if no __dir__ method
  1209. exists. */
  1210. static PyObject *
  1211. _dir_object(PyObject *obj)
  1212. {
  1213. PyObject *result = NULL;
  1214. static PyObject *dir_str = NULL;
  1215. PyObject *dirfunc = _PyObject_LookupSpecial(obj, "__dir__", &dir_str);
  1216. assert(obj);
  1217. if (dirfunc == NULL) {
  1218. if (PyErr_Occurred())
  1219. return NULL;
  1220. /* use default implementation */
  1221. if (PyModule_Check(obj))
  1222. result = _specialized_dir_module(obj);
  1223. else if (PyType_Check(obj))
  1224. result = _specialized_dir_type(obj);
  1225. else
  1226. result = _generic_dir(obj);
  1227. }
  1228. else {
  1229. /* use __dir__ */
  1230. result = PyObject_CallFunctionObjArgs(dirfunc, NULL);
  1231. Py_DECREF(dirfunc);
  1232. if (result == NULL)
  1233. return NULL;
  1234. /* result must be a list */
  1235. /* XXX(gbrandl): could also check if all items are strings */
  1236. if (!PyList_Check(result)) {
  1237. PyErr_Format(PyExc_TypeError,
  1238. "__dir__() must return a list, not %.200s",
  1239. Py_TYPE(result)->tp_name);
  1240. Py_DECREF(result);
  1241. result = NULL;
  1242. }
  1243. }
  1244. return result;
  1245. }
  1246. /* Implementation of dir() -- if obj is NULL, returns the names in the current
  1247. (local) scope. Otherwise, performs introspection of the object: returns a
  1248. sorted list of attribute names (supposedly) accessible from the object
  1249. */
  1250. PyObject *
  1251. PyObject_Dir(PyObject *obj)
  1252. {
  1253. PyObject * result;
  1254. if (obj == NULL)
  1255. /* no object -- introspect the locals */
  1256. result = _dir_locals();
  1257. else
  1258. /* object -- introspect the object */
  1259. result = _dir_object(obj);
  1260. assert(result == NULL || PyList_Check(result));
  1261. if (result != NULL && PyList_Sort(result) != 0) {
  1262. /* sorting the list failed */
  1263. Py_DECREF(result);
  1264. result = NULL;
  1265. }
  1266. return result;
  1267. }
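/*
 * Usage sketch: this is the C-level counterpart of the builtin dir().
 * Passing NULL mirrors dir() with no argument and lists the names in the
 * current local scope (obj is a hypothetical caller-owned object):
 *
 *     PyObject *names = PyObject_Dir(obj);    sorted list of name strings
 *     if (names == NULL)
 *         return NULL;
 *     ...
 *     Py_DECREF(names);
 */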
  1268. /*
1269. None (exposed to C code as the Py_None macro) is usable as a non-NULL undefined value.
  1270. There is (and should be!) no way to create other objects of this type,
  1271. so there is exactly one (which is indestructible, by the way).
  1272. (XXX This type and the type of NotImplemented below should be unified.)
  1273. */
  1274. /* ARGSUSED */
  1275. static PyObject *
  1276. none_repr(PyObject *op)
  1277. {
  1278. return PyUnicode_FromString("None");
  1279. }
1280. /* ARGSUSED */
  1281. static void
  1282. none_dealloc(PyObject* ignore)
  1283. {
  1284. /* This should never get called, but we also don't want to SEGV if
  1285. * we accidentally decref None out of existence.
  1286. */
  1287. Py_FatalError("deallocating None");
  1288. }
  1289. static PyTypeObject PyNone_Type = {
  1290. PyVarObject_HEAD_INIT(&PyType_Type, 0)
  1291. "NoneType",
  1292. 0,
  1293. 0,
  1294. none_dealloc, /*tp_dealloc*/ /*never called*/
  1295. 0, /*tp_print*/
  1296. 0, /*tp_getattr*/
  1297. 0, /*tp_setattr*/
  1298. 0, /*tp_reserved*/
  1299. none_repr, /*tp_repr*/
  1300. 0, /*tp_as_number*/
  1301. 0, /*tp_as_sequence*/
  1302. 0, /*tp_as_mapping*/
  1303. 0, /*tp_hash */
  1304. };
  1305. PyObject _Py_NoneStruct = {
  1306. _PyObject_EXTRA_INIT
  1307. 1, &PyNone_Type
  1308. };
  1309. /* NotImplemented is an object that can be used to signal that an
  1310. operation is not implemented for the given type combination. */
  1311. static PyObject *
  1312. NotImplemented_repr(PyObject *op)
  1313. {
  1314. return PyUnicode_FromString("NotImplemented");
  1315. }
  1316. static PyTypeObject PyNotImplemented_Type = {
  1317. PyVarObject_HEAD_INIT(&PyType_Type, 0)
  1318. "NotImplementedType",
  1319. 0,
  1320. 0,
  1321. none_dealloc, /*tp_dealloc*/ /*never called*/
  1322. 0, /*tp_print*/
  1323. 0, /*tp_getattr*/
  1324. 0, /*tp_setattr*/
  1325. 0, /*tp_reserved*/
  1326. NotImplemented_repr, /*tp_repr*/
  1327. 0, /*tp_as_number*/
  1328. 0, /*tp_as_sequence*/
  1329. 0, /*tp_as_mapping*/
  1330. 0, /*tp_hash */
  1331. };
  1332. PyObject _Py_NotImplementedStruct = {
  1333. _PyObject_EXTRA_INIT
  1334. 1, &PyNotImplemented_Type
  1335. };
  1336. void
  1337. _Py_ReadyTypes(void)
  1338. {
  1339. if (PyType_Ready(&PyType_Type) < 0)
  1340. Py_FatalError("Can't initialize type type");
  1341. if (PyType_Ready(&_PyWeakref_RefType) < 0)
  1342. Py_FatalError("Can't initialize weakref type");
  1343. if (PyType_Ready(&_PyWeakref_CallableProxyType) < 0)
  1344. Py_FatalError("Can't initialize callable weakref proxy type");
  1345. if (PyType_Ready(&_PyWeakref_ProxyType) < 0)
  1346. Py_FatalError("Can't initialize weakref proxy type");
  1347. if (PyType_Ready(&PyBool_Type) < 0)
  1348. Py_FatalError("Can't initialize bool type");
  1349. if (PyType_Ready(&PyByteArray_Type) < 0)
  1350. Py_FatalError("Can't initialize bytearray type");
  1351. if (PyType_Ready(&PyBytes_Type) < 0)
  1352. Py_FatalError("Can't initialize 'str'");
  1353. if (PyType_Ready(&PyList_Type) < 0)
  1354. Py_FatalError("Can't initialize list type");
  1355. if (PyType_Ready(&PyNone_Type) < 0)
  1356. Py_FatalError("Can't initialize None type");
  1357. if (PyType_Ready(Py_Ellipsis->ob_type) < 0)
  1358. Py_FatalError("Can't initialize type(Ellipsis)");
  1359. if (PyType_Ready(&PyNotImplemented_Type) < 0)
  1360. Py_FatalError("Can't initialize NotImplemented type");
  1361. if (PyType_Ready(&PyTraceBack_Type) < 0)
  1362. Py_FatalError("Can't initialize traceback type");
  1363. if (PyType_Ready(&PySuper_Type) < 0)
  1364. Py_FatalError("Can't initialize super type");
  1365. if (PyType_Ready(&PyBaseObject_Type) < 0)
  1366. Py_FatalError("Can't initialize object type");
  1367. if (PyType_Ready(&PyRange_Type) < 0)
  1368. Py_FatalError("Can't initialize range type");
  1369. if (PyType_Ready(&PyDict_Type) < 0)
  1370. Py_FatalError("Can't initialize dict type");
  1371. if (PyType_Ready(&PySet_Type) < 0)
  1372. Py_FatalError("Can't initialize set type");
  1373. if (PyType_Ready(&PyUnicode_Type) < 0)
  1374. Py_FatalError("Can't initialize str type");
  1375. if (PyType_Ready(&PySlice_Type) < 0)
  1376. Py_FatalError("Can't initialize slice type");
  1377. if (PyType_Ready(&PyStaticMethod_Type) < 0)
  1378. Py_FatalError("Can't initialize static method type");
  1379. if (PyType_Ready(&PyComplex_Type) < 0)
  1380. Py_FatalError("Can't initialize complex type");
  1381. if (PyType_Ready(&PyFloat_Type) < 0)
  1382. Py_FatalError("Can't initialize float type");
  1383. if (PyType_Ready(&PyLong_Type) < 0)
  1384. Py_FatalError("Can't initialize int type");
  1385. if (PyType_Ready(&PyFrozenSet_Type) < 0)
  1386. Py_FatalError("Can't initialize frozenset type");
  1387. if (PyType_Ready(&PyProperty_Type) < 0)
  1388. Py_FatalError("Can't initialize property type");
  1389. if (PyType_Ready(&PyMemoryView_Type) < 0)
  1390. Py_FatalError("Can't initialize memoryview type");
  1391. if (PyType_Ready(&PyTuple_Type) < 0)
  1392. Py_FatalError("Can't initialize tuple type");
  1393. if (PyType_Ready(&PyEnum_Type) < 0)
  1394. Py_FatalError("Can't initialize enumerate type");
  1395. if (PyType_Ready(&PyReversed_Type) < 0)
  1396. Py_FatalError("Can't initialize reversed type");
  1397. if (PyType_Ready(&PyStdPrinter_Type) < 0)
  1398. Py_FatalError("Can't initialize StdPrinter");
  1399. if (PyType_Ready(&PyCode_Type) < 0)
  1400. Py_FatalError("Can't initialize code type");
  1401. if (PyType_Ready(&PyFrame_Type) < 0)
  1402. Py_FatalError("Can't initialize frame type");
  1403. if (PyType_Ready(&PyCFunction_Type) < 0)
  1404. Py_FatalError("Can't initialize builtin function type");
  1405. if (PyType_Ready(&PyMethod_Type) < 0)
  1406. Py_FatalError("Can't initialize method type");
  1407. if (PyType_Ready(&PyFunction_Type) < 0)
  1408. Py_FatalError("Can't initialize function type");
  1409. if (PyType_Ready(&PyDictProxy_Type) < 0)
  1410. Py_FatalError("Can't initialize dict proxy type");
  1411. if (PyType_Ready(&PyGen_Type) < 0)
  1412. Py_FatalError("Can't initialize generator type");
  1413. if (PyType_Ready(&PyGetSetDescr_Type) < 0)
  1414. Py_FatalError("Can't initialize get-set descriptor type");
  1415. if (PyType_Ready(&PyWrapperDescr_Type) < 0)
  1416. Py_FatalError("Can't initialize wrapper type");
  1417. if (PyType_Ready(&PyEllipsis_Type) < 0)
  1418. Py_FatalError("Can't initialize ellipsis type");
  1419. if (PyType_Ready(&PyMemberDescr_Type) < 0)
  1420. Py_FatalError("Can't initialize member descriptor type");
  1421. if (PyType_Ready(&PyFilter_Type) < 0)
  1422. Py_FatalError("Can't initialize filter type");
  1423. if (PyType_Ready(&PyMap_Type) < 0)
  1424. Py_FatalError("Can't initialize map type");
  1425. if (PyType_Ready(&PyZip_Type) < 0)
  1426. Py_FatalError("Can't initialize zip type");
  1427. }
  1428. #ifdef Py_TRACE_REFS
  1429. void
  1430. _Py_NewReference(PyObject *op)
  1431. {
  1432. _Py_INC_REFTOTAL;
  1433. op->ob_refcnt = 1;
  1434. _Py_AddToAllObjects(op, 1);
  1435. _Py_INC_TPALLOCS(op);
  1436. }
  1437. void
  1438. _Py_ForgetReference(register PyObject *op)
  1439. {
  1440. #ifdef SLOW_UNREF_CHECK
  1441. register PyObject *p;
  1442. #endif
  1443. if (op->ob_refcnt < 0)
  1444. Py_FatalError("UNREF negative refcnt");
  1445. if (op == &refchain ||
  1446. op->_ob_prev->_ob_next != op || op->_ob_next->_ob_prev != op) {
  1447. fprintf(stderr, "* ob\n");
  1448. _PyObject_Dump(op);
  1449. fprintf(stderr, "* op->_ob_prev->_ob_next\n");
  1450. _PyObject_Dump(op->_ob_prev->_ob_next);
  1451. fprintf(stderr, "* op->_ob_next->_ob_prev\n");
  1452. _PyObject_Dump(op->_ob_next->_ob_prev);
  1453. Py_FatalError("UNREF invalid object");
  1454. }
  1455. #ifdef SLOW_UNREF_CHECK
  1456. for (p = refchain._ob_next; p != &refchain; p = p->_ob_next) {
  1457. if (p == op)
  1458. break;
  1459. }
  1460. if (p == &refchain) /* Not found */
  1461. Py_FatalError("UNREF unknown object");
  1462. #endif
  1463. op->_ob_next->_ob_prev = op->_ob_prev;
  1464. op->_ob_prev->_ob_next = op->_ob_next;
  1465. op->_ob_next = op->_ob_prev = NULL;
  1466. _Py_INC_TPFREES(op);
  1467. }
  1468. void
  1469. _Py_Dealloc(PyObject *op)
  1470. {
  1471. destructor dealloc = Py_TYPE(op)->tp_dealloc;
  1472. _Py_ForgetReference(op);
  1473. (*dealloc)(op);
  1474. }
  1475. /* Print all live objects. Because PyObject_Print is called, the
  1476. * interpreter must be in a healthy state.
  1477. */
  1478. void
  1479. _Py_PrintReferences(FILE *fp)
  1480. {
  1481. PyObject *op;
  1482. fprintf(fp, "Remaining objects:\n");
  1483. for (op = refchain._ob_next; op != &refchain; op = op->_ob_next) {
  1484. fprintf(fp, "%p [%" PY_FORMAT_SIZE_T "d] ", op, op->ob_refcnt);
  1485. if (PyObject_Print(op, fp, 0) != 0)
  1486. PyErr_Clear();
  1487. putc('\n', fp);
  1488. }
  1489. }
  1490. /* Print the addresses of all live objects. Unlike _Py_PrintReferences, this
  1491. * doesn't make any calls to the Python C API, so is always safe to call.
  1492. */
  1493. void
  1494. _Py_PrintReferenceAddresses(FILE *fp)
  1495. {
  1496. PyObject *op;
  1497. fprintf(fp, "Remaining object addresses:\n");
  1498. for (op = refchain._ob_next; op != &refchain; op = op->_ob_next)
  1499. fprintf(fp, "%p [%" PY_FORMAT_SIZE_T "d] %s\n", op,
  1500. op->ob_refcnt, Py_TYPE(op)->tp_name);
  1501. }
  1502. PyObject *
  1503. _Py_GetObjects(PyObject *self, PyObject *args)
  1504. {
  1505. int i, n;
  1506. PyObject *t = NULL;
  1507. PyObject *res, *op;
  1508. if (!PyArg_ParseTuple(args, "i|O", &n, &t))
  1509. return NULL;
  1510. op = refchain._ob_next;
  1511. res = PyList_New(0);
  1512. if (res == NULL)
  1513. return NULL;
  1514. for (i = 0; (n == 0 || i < n) && op != &refchain; i++) {
  1515. while (op == self || op == args || op == res || op == t ||
  1516. (t != NULL && Py_TYPE(op) != (PyTypeObject *) t)) {
  1517. op = op->_ob_next;
  1518. if (op == &refchain)
  1519. return res;
  1520. }
  1521. if (PyList_Append(res, op) < 0) {
  1522. Py_DECREF(res);
  1523. return NULL;
  1524. }
  1525. op = op->_ob_next;
  1526. }
  1527. return res;
  1528. }
  1529. #endif
  1530. /* Hack to force loading of pycapsule.o */
  1531. PyTypeObject *_PyCapsule_hack = &PyCapsule_Type;
  1532. /* Hack to force loading of abstract.o */
  1533. Py_ssize_t (*_Py_abstract_hack)(PyObject *) = PyObject_Size;
  1534. /* Python's malloc wrappers (see pymem.h) */
  1535. void *
  1536. PyMem_Malloc(size_t nbytes)
  1537. {
  1538. return PyMem_MALLOC(nbytes);
  1539. }
  1540. void *
  1541. PyMem_Realloc(void *p, size_t nbytes)
  1542. {
  1543. return PyMem_REALLOC(p, nbytes);
  1544. }
  1545. void
  1546. PyMem_Free(void *p)
  1547. {
  1548. PyMem_FREE(p);
  1549. }
  1550. /* These methods are used to control infinite recursion in repr, str, print,
  1551. etc. Container objects that may recursively contain themselves,
1552. e.g. builtin dictionaries and lists, should use Py_ReprEnter() and
  1553. Py_ReprLeave() to avoid infinite recursion.
  1554. Py_ReprEnter() returns 0 the first time it is called for a particular
  1555. object and 1 every time thereafter. It returns -1 if an exception
  1556. occurred. Py_ReprLeave() has no return value.
  1557. See dictobject.c and listobject.c for examples of use.
  1558. */
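/*
 * A minimal sketch of the protocol described above for a hypothetical
 * container type's tp_repr (the real patterns live in listobject.c and
 * dictobject.c):
 *
 *     static PyObject *
 *     mycontainer_repr(PyObject *self)
 *     {
 *         PyObject *result;
 *         int status = Py_ReprEnter(self);
 *         if (status != 0) {
 *             if (status < 0)
 *                 return NULL;                          error
 *             return PyUnicode_FromString("[...]");     cycle detected
 *         }
 *         result = ...;                                 build the normal repr
 *         Py_ReprLeave(self);
 *         return result;
 *     }
 */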
  1559. #define KEY "Py_Repr"
  1560. int
  1561. Py_ReprEnter(PyObject *obj)
  1562. {
  1563. PyObject *dict;
  1564. PyObject *list;
  1565. Py_ssize_t i;
  1566. dict = PyThreadState_GetDict();
  1567. if (dict == NULL)
  1568. return 0;
  1569. list = PyDict_GetItemString(dict, KEY);
  1570. if (list == NULL) {
  1571. list = PyList_New(0);
  1572. if (list == NULL)
  1573. return -1;
  1574. if (PyDict_SetItemString(dict, KEY, list) < 0)
  1575. return -1;
  1576. Py_DECREF(list);
  1577. }
  1578. i = PyList_GET_SIZE(list);
  1579. while (--i >= 0) {
  1580. if (PyList_GET_ITEM(list, i) == obj)
  1581. return 1;
  1582. }
  1583. PyList_Append(list, obj);
  1584. return 0;
  1585. }
  1586. void
  1587. Py_ReprLeave(PyObject *obj)
  1588. {
  1589. PyObject *dict;
  1590. PyObject *list;
  1591. Py_ssize_t i;
  1592. dict = PyThreadState_GetDict();
  1593. if (dict == NULL)
  1594. return;
  1595. list = PyDict_GetItemString(dict, KEY);
  1596. if (list == NULL || !PyList_Check(list))
  1597. return;
  1598. i = PyList_GET_SIZE(list);
  1599. /* Count backwards because we always expect obj to be list[-1] */
  1600. while (--i >= 0) {
  1601. if (PyList_GET_ITEM(list, i) == obj) {
  1602. PyList_SetSlice(list, i, i + 1, NULL);
  1603. break;
  1604. }
  1605. }
  1606. }
  1607. /* Trashcan support. */
  1608. /* Current call-stack depth of tp_dealloc calls. */
  1609. int _PyTrash_delete_nesting = 0;
  1610. /* List of objects that still need to be cleaned up, singly linked via their
  1611. * gc headers' gc_prev pointers.
  1612. */
  1613. PyObject *_PyTrash_delete_later = NULL;
  1614. /* Add op to the _PyTrash_delete_later list. Called when the current
  1615. * call-stack depth gets large. op must be a currently untracked gc'ed
  1616. * object, with refcount 0. Py_DECREF must already have been called on it.
  1617. */
  1618. void
  1619. _PyTrash_deposit_object(PyObject *op)
  1620. {
  1621. assert(PyObject_IS_GC(op));
  1622. assert(_Py_AS_GC(op)->gc.gc_refs == _PyGC_REFS_UNTRACKED);
  1623. assert(op->ob_refcnt == 0);
  1624. _Py_AS_GC(op)->gc.gc_prev = (PyGC_Head *)_PyTrash_delete_later;
  1625. _PyTrash_delete_later = op;
  1626. }
1627. /* Deallocate all the objects in the _PyTrash_delete_later list. Called when
  1628. * the call-stack unwinds again.
  1629. */
  1630. void
  1631. _PyTrash_destroy_chain(void)
  1632. {
  1633. while (_PyTrash_delete_later) {
  1634. PyObject *op = _PyTrash_delete_later;
  1635. destructor dealloc = Py_TYPE(op)->tp_dealloc;
  1636. _PyTrash_delete_later =
  1637. (PyObject*) _Py_AS_GC(op)->gc.gc_prev;
  1638. /* Call the deallocator directly. This used to try to
  1639. * fool Py_DECREF into calling it indirectly, but
  1640. * Py_DECREF was already called on this object, and in
  1641. * assorted non-release builds calling Py_DECREF again ends
  1642. * up distorting allocation statistics.
  1643. */
  1644. assert(op->ob_refcnt == 0);
  1645. ++_PyTrash_delete_nesting;
  1646. (*dealloc)(op);
  1647. --_PyTrash_delete_nesting;
  1648. }
  1649. }
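/*
 * Usage sketch (hypothetical container type): deallocators that can recurse
 * deeply normally go through the trashcan macros in object.h, which call
 * _PyTrash_deposit_object and _PyTrash_destroy_chain once the nesting
 * counter above grows too large:
 *
 *     static void
 *     mycontainer_dealloc(PyObject *op)
 *     {
 *         PyObject_GC_UnTrack(op);
 *         Py_TRASHCAN_SAFE_BEGIN(op)
 *         ...                          drop contained references, free op
 *         Py_TRASHCAN_SAFE_END(op)
 *     }
 */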
  1650. #ifndef Py_TRACE_REFS
  1651. /* For Py_LIMITED_API, we need an out-of-line version of _Py_Dealloc.
  1652. Define this here, so we can undefine the macro. */
  1653. #undef _Py_Dealloc
  1654. PyAPI_FUNC(void) _Py_Dealloc(PyObject *);
  1655. void
  1656. _Py_Dealloc(PyObject *op)
  1657. {
  1658. _Py_INC_TPFREES(op) _Py_COUNT_ALLOCS_COMMA
  1659. (*Py_TYPE(op)->tp_dealloc)(op);
  1660. }
  1661. #endif
  1662. #ifdef __cplusplus
  1663. }
  1664. #endif