gh-145876: Do not mask AttributeErrors raised during dictionary unpacking #145906
serhiy-storchaka wants to merge 1 commit into python:main
Conversation
AttributeErrors raised in keys() or __getitem__() during
dictionary unpacking ({**mymapping} or func(**mymapping)) are
no longer masked by TypeError.
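The behavior being fixed can be sketched in pure Python (class name is hypothetical, not from the PR): a mapping whose keys() raises AttributeError internally. Before this change, unpacking such an object reported a TypeError ("object is not a mapping") instead of the real error; the sketch deliberately does not assume which exception a given interpreter build raises.

```python
# Hypothetical reproducer for the masking bug this PR fixes: keys()
# itself exists, but raises AttributeError internally because of a bug.
class BrokenMapping:
    def keys(self):
        # Simulate a genuine bug inside keys(): touching a missing attribute.
        return self.missing_attribute

    def __getitem__(self, key):
        return None

m = BrokenMapping()
try:
    d = {**m}
except Exception as exc:
    # Pre-fix interpreters report TypeError here; with the fix,
    # the original AttributeError propagates instead.
    print(type(exc).__name__, exc)
```

Calling m.keys() directly always surfaces the AttributeError; only the unpacking path was affected by the masking.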
NickCrews
left a comment
The tests look good to me, modulo my question about checking for error chaining.
I wanted to point out the non-atomic way we are checking for the presence of .keys(). But I'm guessing you had reasons for not doing it differently.
Thanks for your work here!
                  "'%.200s' object is not a mapping",
                  Py_TYPE(update_o)->tp_name);
    PyObject *exc = _PyErr_GetRaisedException(tstate);
    int has_keys = PyObject_HasAttrWithError(update_o, &_Py_ID(keys));
I suppose a user could be absolutely insane and delete the .keys() method of their object in the middle of throwing their error. Then this check here wouldn't really detect the original cause of the error. Ideally this check would be "atomic" with our actual attempt at accessing keys. But this seems like such an edge case that the more performant check-details-only-after-error approach seems like the right balance of perf vs. correctness.
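The edge case can be illustrated in pure Python (the class name is hypothetical): keys() exists when called, raises, and removes itself on the way out, so a post-hoc probe for "keys" sees a different picture than what held when the error was raised.

```python
# Hypothetical illustration of the non-atomic check: keys() exists when
# called, raises AttributeError, and deletes itself mid-failure. A check
# for "keys" performed *after* the error then reports it missing, so a
# post-hoc probe would misclassify the failure as "not a mapping".
class SelfSabotagingMapping:
    def __init__(self):
        # Bind keys as an instance attribute so it can be deleted per-instance.
        self.keys = self._keys

    def _keys(self):
        del self.keys  # remove the method while failing
        raise AttributeError("original error from inside keys()")

m = SelfSabotagingMapping()
try:
    m.keys()
except AttributeError as exc:
    print("raised:", exc)

# The after-the-fact probe no longer sees keys():
print(hasattr(m, "keys"))  # False
```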
If such an API existed, I think it would be better to inspect the exc PyObject to see whether it was raised:
- from attempting to access update_o
- using the "keys" attribute name

instead of doing this PyObject_HasAttrWithError() check.
Or, the other more heavy-handed solution would be to adjust the PyDict_Update(dict_o, update_o); line above to a more explicit API with a result object, such as

UpdateResult update_result;
PyDict_Update(dict_o, update_o, &update_result);
if (update_result.error) ...
if (update_result.error_cause == NO_KEYS) ...

but I'm not sure about the perf, maintenance, etc. costs of doing this.
        Py_DECREF(exc);
    }
    else {
        _PyErr_ChainExceptions1(exc);
I'm assuming that this C API is the thing responsible for the "During handling of the above exception, another exception occurred" message. Should the test cases above check that the user sees this error chain? Currently it looks like we only check for the root cause of the error.
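A sketch of what that chaining assertion could look like in a test, assuming the fix records the original error via implicit chaining (which is what __context__ and that traceback message reflect); the exception messages here are made up for illustration:

```python
# Implicit chaining: raising a new exception while handling another one
# records the original on __context__, which is what produces the
# "During handling of the above exception, another exception occurred"
# section of the traceback.
try:
    try:
        raise AttributeError("original cause")
    except AttributeError:
        # Simulates re-raising after the original error has been captured.
        raise TypeError("'Foo' object is not a mapping")
except TypeError as exc:
    caught = exc

# A test could assert both the outer error and its recorded context:
assert isinstance(caught.__context__, AttributeError)
assert str(caught.__context__) == "original cause"
```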
    int has_keys = PyObject_HasAttrWithError(update_o, &_Py_ID(keys));
    if (has_keys == 0) {
        _PyErr_Format(tstate, PyExc_TypeError,
                      "'%T' object is not a mapping",
Flagging that this changes the format string from '%.200s' to '%T', but I'm assuming that is intentional.
                  "Value after ** must be a mapping, not %.200s",
                  Py_TYPE(kwargs)->tp_name);
    PyObject *exc = _PyErr_GetRaisedException(tstate);
    int has_keys = PyObject_HasAttrWithError(kwargs, &_Py_ID(keys));
Similar to above, I would love a more watertight/atomic way of checking the real cause of the error.
AttributeErrors raised in keys() or __getitem__() during dictionary unpacking ({**mymapping} or func(**mymapping)) are no longer masked by TypeError.
.keys() or .__getitem__() during {**mymapping} are incorrectly masked #145876