This PR changes the current JIT model from trace projection to trace recording.

Benchmarking: a better pyperformance geomean (about 1.7% overall) versus the current JIT
https://raw.githubusercontent.com/facebookexperimental/free-threading-benchmarking/refs/heads/main/results/bm-20251108-3.15.0a1%2B-7e2bc1d-JIT/bm-20251108-vultr-x86_64-Fidget%252dSpinner-tracing_jit-3.15.0a1%2B-7e2bc1d-vs-base.svg
The most-improved benchmark, Richards, is 100% faster than on the current JIT; the worst benchmark is about 10-15% slower.

**Note: the fastest version isn't the one merged, as it relies on fixing bugs in the specializing interpreter, which is left to another PR.** The speedup in the merged version is about 1.1%.
https://raw.githubusercontent.com/facebookexperimental/free-threading-benchmarking/refs/heads/main/results/bm-20251112-3.15.0a1%2B-f8a764a-JIT/bm-20251112-vultr-x86_64-Fidget%252dSpinner-tracing_jit-3.15.0a1%2B-f8a764a-vs-base.svg

Stats: 50% more uops executed and 30% more traces entered, as of the last stats run. The stats also suggest that our trace lengths are too short for a real trace-recording JIT, as there are a lot of "trace too long" aborts:
https://github.com/facebookexperimental/free-threading-benchmarking/blob/main/results/bm-20251023-3.15.0a1%2B-eb73378-CLANG%2CJIT/bm-20251023-vultr-x86_64-Fidget%252dSpinner-tracing_jit-3.15.0a1%2B-eb73378-pystats-vs-base.md

This new JIT frontend is already able to record and execute significantly more instructions than the previous one. With this PR, we can now record through custom dunders, simple object creation, generators, etc. None of these were handled by the old JIT frontend. Some custom-dunder uops were discovered to be broken as part of this work: gh-140277

The optimizer stack space check is disabled, as it is no longer valid when dealing with underflow.

Pros:
* Ignoring the generated tracer code, which is created automatically, this adds only about 1k lines of code. The maintenance burden is handled by the DSL and code generator.
* `optimizer.c` is now significantly simpler, as we no longer have to do strange things to recover the bytecode from a trace.
* The new JIT frontend can handle a lot more control flow than the old one.
* Tracing has very low overhead. We use the tail-calling interpreter/computed-goto interpreter to switch between tracing mode and non-tracing mode. I call this mechanism dual dispatch, as we have two dispatch tables dispatching to each other (see the sketch after the Design list). Specialization stays enabled while tracing.
* Better handling of polymorphism. We leverage the specializing interpreter for this.

Cons:
* (For now) requires the tail-calling interpreter or computed gotos. This means no Windows JIT for now :(. Not to fret, tail calling is coming to Windows soon: https://github.com/python/cpython/pull/139962

Design:
* After each instruction, the `record_previous_inst` function/label is executed. It does what the name suggests.
* The tracing interpreter lowers bytecode to uops directly, so that it can obtain "fresh" values at the point of lowering.
* The tracing interpreter behaves nearly identically to the normal interpreter; in fact, it even has specialization! This lets it run without much of a slowdown while tracing. The actual cost of tracing is only a function call and some writes to memory.
* The tracing interpreter uses the specializing interpreter's deopts to naturally form the side-exit chains. This lets it chain side exits effectively without repeating much code. We force re-specialization when tracing through a deopt.
* The tracing interpreter can even handle goto errors/exceptions, but I chose to disable that for now as it is untested.
* Because we do not share interpreter dispatch, there should be no significant slowdown to the original specializing interpreter on tail-calling and computed-goto builds with the JIT disabled. With the JIT enabled, there might be a slowdown in the form of the JIT trying to trace.
* Anything that could have dynamic instruction-pointer effects is guarded. The guard deopts to a new instruction, `_DYNAMIC_EXIT`.
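To make "dual dispatch" concrete, here is a minimal, self-contained sketch of the idea, not CPython's actual implementation: only the name `record_previous_inst` comes from this PR, and everything else (the `Inst`/`TraceBuffer` types, the table layout) is illustrative. The tracing handlers differ from the plain ones only by a call that records the instruction just executed, which is why the overhead of tracing stays at roughly a function call plus some memory writes.

```c
#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

enum { OP_NOP, OP_HALT, NUM_OPS };

typedef struct { uint8_t opcode; uint8_t oparg; } Inst;
typedef struct { Inst buf[64]; size_t len; } TraceBuffer;  /* hypothetical */

typedef void (*Handler)(Inst *ip, TraceBuffer *tb, const Handler *table);

/* Record the instruction we just left; the real tracer lowers it to uops
 * here so it can see "fresh" values at the point of lowering. */
static void record_previous_inst(TraceBuffer *tb, Inst *prev) {
    if (tb->len < sizeof tb->buf / sizeof tb->buf[0]) {
        tb->buf[tb->len++] = *prev;
    }
}

/* Each handler dispatches the next instruction through whichever table it
 * was given; with tail calls / computed gotos this becomes a flat loop. */
#define DISPATCH(ip, tb, table) table[(ip)->opcode]((ip), (tb), table)

static void nop_plain(Inst *ip, TraceBuffer *tb, const Handler *table) {
    DISPATCH(ip + 1, tb, table);              /* no tracing overhead */
}
static void halt_plain(Inst *ip, TraceBuffer *tb, const Handler *table) {
    (void)ip; (void)tb; (void)table;
}

/* Tracing twins: identical, plus one call and some memory writes. */
static void nop_tracing(Inst *ip, TraceBuffer *tb, const Handler *table) {
    record_previous_inst(tb, ip);
    DISPATCH(ip + 1, tb, table);
}
static void halt_tracing(Inst *ip, TraceBuffer *tb, const Handler *table) {
    record_previous_inst(tb, ip);
    (void)table;
}

/* Two dispatch tables: switching between tracing and non-tracing mode is
 * just a matter of which table the next dispatch goes through. */
static const Handler plain_table[NUM_OPS]   = { nop_plain, halt_plain };
static const Handler tracing_table[NUM_OPS] = { nop_tracing, halt_tracing };

int main(void) {
    Inst program[] = { {OP_NOP, 0}, {OP_NOP, 0}, {OP_HALT, 0} };
    TraceBuffer tb = { .len = 0 };
    DISPATCH(program, &tb, tracing_table);    /* enter tracing mode */
    printf("recorded %zu instructions\n", tb.len);
    return 0;
}
```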
3486 lines · 118 KiB · C · Generated
// This file is generated by Tools/cases_generator/optimizer_generator.py
// from:
// Python/optimizer_bytecodes.c
// Do not edit!

        case _NOP: {
            break;
        }

        case _CHECK_PERIODIC: {
            break;
        }

        /* _CHECK_PERIODIC_AT_END is not a viable micro-op for tier 2 */

        case _CHECK_PERIODIC_IF_NOT_YIELD_FROM: {
            break;
        }

        /* _QUICKEN_RESUME is not a viable micro-op for tier 2 */

        /* _LOAD_BYTECODE is not a viable micro-op for tier 2 */

        case _RESUME_CHECK: {
            break;
        }

        /* _MONITOR_RESUME is not a viable micro-op for tier 2 */

        case _LOAD_FAST_CHECK: {
            JitOptRef value;
            value = GETLOCAL(oparg);
            if (sym_is_null(value)) {
                ctx->done = true;
            }
            stack_pointer[0] = value;
            stack_pointer += 1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _LOAD_FAST: {
            JitOptRef value;
            value = GETLOCAL(oparg);
            stack_pointer[0] = value;
            stack_pointer += 1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _LOAD_FAST_BORROW: {
            JitOptRef value;
            value = PyJitRef_Borrow(GETLOCAL(oparg));
            stack_pointer[0] = value;
            stack_pointer += 1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _LOAD_FAST_AND_CLEAR: {
            JitOptRef value;
            value = GETLOCAL(oparg);
            JitOptRef temp = sym_new_null(ctx);
            GETLOCAL(oparg) = temp;
            stack_pointer[0] = value;
            stack_pointer += 1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _LOAD_CONST: {
            JitOptRef value;
            PyCodeObject *co = get_current_code_object(ctx);
            PyObject *val = PyTuple_GET_ITEM(co->co_consts, oparg);
            REPLACE_OP(this_instr, _LOAD_CONST_INLINE_BORROW, 0, (uintptr_t)val);
            value = PyJitRef_Borrow(sym_new_const(ctx, val));
            stack_pointer[0] = value;
            stack_pointer += 1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _LOAD_SMALL_INT: {
            JitOptRef value;
            PyObject *val = PyLong_FromLong(oparg);
            assert(val);
            assert(_Py_IsImmortal(val));
            REPLACE_OP(this_instr, _LOAD_CONST_INLINE_BORROW, 0, (uintptr_t)val);
            value = PyJitRef_Borrow(sym_new_const(ctx, val));
            stack_pointer[0] = value;
            stack_pointer += 1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _STORE_FAST: {
            JitOptRef value;
            value = stack_pointer[-1];
            GETLOCAL(oparg) = value;
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }
        case _POP_TOP: {
            JitOptRef value;
            value = stack_pointer[-1];
            PyTypeObject *typ = sym_get_type(value);
            if (PyJitRef_IsBorrowed(value) ||
                sym_is_immortal(PyJitRef_Unwrap(value)) ||
                sym_is_null(value)) {
                REPLACE_OP(this_instr, _POP_TOP_NOP, 0, 0);
            }
            else if (typ == &PyLong_Type) {
                REPLACE_OP(this_instr, _POP_TOP_INT, 0, 0);
            }
            else if (typ == &PyFloat_Type) {
                REPLACE_OP(this_instr, _POP_TOP_FLOAT, 0, 0);
            }
            else if (typ == &PyUnicode_Type) {
                REPLACE_OP(this_instr, _POP_TOP_UNICODE, 0, 0);
            }
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _POP_TOP_NOP: {
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _POP_TOP_INT: {
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _POP_TOP_FLOAT: {
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _POP_TOP_UNICODE: {
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _POP_TWO: {
            stack_pointer += -2;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _PUSH_NULL: {
            JitOptRef res;
            res = sym_new_null(ctx);
            stack_pointer[0] = res;
            stack_pointer += 1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _END_FOR: {
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _POP_ITER: {
            stack_pointer += -2;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _END_SEND: {
            JitOptRef val;
            val = sym_new_not_null(ctx);
            stack_pointer[-2] = val;
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _UNARY_NEGATIVE: {
            JitOptRef value;
            JitOptRef res;
            value = stack_pointer[-1];
            if (
                sym_is_safe_const(ctx, value)
            ) {
                JitOptRef value_sym = value;
                _PyStackRef value = sym_get_const_as_stackref(ctx, value_sym);
                _PyStackRef res_stackref;
                /* Start of uop copied from bytecodes for constant evaluation */
                PyObject *res_o = PyNumber_Negative(PyStackRef_AsPyObjectBorrow(value));
                PyStackRef_CLOSE(value);
                if (res_o == NULL) {
                    goto error;
                }
                res_stackref = PyStackRef_FromPyObjectSteal(res_o);
                /* End of uop copied from bytecodes for constant evaluation */
                res = sym_new_const_steal(ctx, PyStackRef_AsPyObjectSteal(res_stackref));
                if (sym_is_const(ctx, res)) {
                    PyObject *result = sym_get_const(ctx, res);
                    if (_Py_IsImmortal(result)) {
                        // Replace with _POP_TOP_LOAD_CONST_INLINE_BORROW since we have one input and an immortal result
                        REPLACE_OP(this_instr, _POP_TOP_LOAD_CONST_INLINE_BORROW, 0, (uintptr_t)result);
                    }
                }
                stack_pointer[-1] = res;
                break;
            }
            if (sym_is_compact_int(value)) {
                res = sym_new_compact_int(ctx);
            }
            else {
                PyTypeObject *type = sym_get_type(value);
                if (type == &PyLong_Type || type == &PyFloat_Type) {
                    res = sym_new_type(ctx, type);
                }
                else {
                    res = sym_new_not_null(ctx);
                }
            }
            stack_pointer[-1] = res;
            break;
        }
        case _UNARY_NOT: {
            JitOptRef value;
            JitOptRef res;
            value = stack_pointer[-1];
            if (
                sym_is_safe_const(ctx, value)
            ) {
                JitOptRef value_sym = value;
                _PyStackRef value = sym_get_const_as_stackref(ctx, value_sym);
                _PyStackRef res_stackref;
                /* Start of uop copied from bytecodes for constant evaluation */
                assert(PyStackRef_BoolCheck(value));
                res_stackref = PyStackRef_IsFalse(value)
                    ? PyStackRef_True : PyStackRef_False;
                /* End of uop copied from bytecodes for constant evaluation */
                res = sym_new_const_steal(ctx, PyStackRef_AsPyObjectSteal(res_stackref));
                if (sym_is_const(ctx, res)) {
                    PyObject *result = sym_get_const(ctx, res);
                    if (_Py_IsImmortal(result)) {
                        // Replace with _POP_TOP_LOAD_CONST_INLINE_BORROW since we have one input and an immortal result
                        REPLACE_OP(this_instr, _POP_TOP_LOAD_CONST_INLINE_BORROW, 0, (uintptr_t)result);
                    }
                }
                stack_pointer[-1] = res;
                break;
            }
            sym_set_type(value, &PyBool_Type);
            res = sym_new_truthiness(ctx, value, false);
            stack_pointer[-1] = res;
            break;
        }

        case _TO_BOOL: {
            JitOptRef value;
            JitOptRef res;
            value = stack_pointer[-1];
            int already_bool = optimize_to_bool(this_instr, ctx, value, &res);
            if (!already_bool) {
                res = sym_new_truthiness(ctx, value, true);
            }
            stack_pointer[-1] = res;
            break;
        }

        case _TO_BOOL_BOOL: {
            JitOptRef value;
            value = stack_pointer[-1];
            int already_bool = optimize_to_bool(this_instr, ctx, value, &value);
            if (!already_bool) {
                sym_set_type(value, &PyBool_Type);
            }
            stack_pointer[-1] = value;
            break;
        }

        case _TO_BOOL_INT: {
            JitOptRef value;
            JitOptRef res;
            value = stack_pointer[-1];
            int already_bool = optimize_to_bool(this_instr, ctx, value, &res);
            if (!already_bool) {
                sym_set_type(value, &PyLong_Type);
                res = sym_new_truthiness(ctx, value, true);
            }
            stack_pointer[-1] = res;
            break;
        }

        case _GUARD_NOS_LIST: {
            JitOptRef nos;
            nos = stack_pointer[-2];
            if (sym_matches_type(nos, &PyList_Type)) {
                REPLACE_OP(this_instr, _NOP, 0, 0);
            }
            sym_set_type(nos, &PyList_Type);
            break;
        }

        case _GUARD_TOS_LIST: {
            JitOptRef tos;
            tos = stack_pointer[-1];
            if (sym_matches_type(tos, &PyList_Type)) {
                REPLACE_OP(this_instr, _NOP, 0, 0);
            }
            sym_set_type(tos, &PyList_Type);
            break;
        }

        case _GUARD_TOS_SLICE: {
            break;
        }

        case _TO_BOOL_LIST: {
            JitOptRef value;
            JitOptRef res;
            value = stack_pointer[-1];
            int already_bool = optimize_to_bool(this_instr, ctx, value, &res);
            if (!already_bool) {
                res = sym_new_type(ctx, &PyBool_Type);
            }
            stack_pointer[-1] = res;
            break;
        }

        case _TO_BOOL_NONE: {
            JitOptRef value;
            JitOptRef res;
            value = stack_pointer[-1];
            int already_bool = optimize_to_bool(this_instr, ctx, value, &res);
            if (!already_bool) {
                sym_set_const(value, Py_None);
                res = sym_new_const(ctx, Py_False);
            }
            stack_pointer[-1] = res;
            break;
        }

        case _GUARD_NOS_UNICODE: {
            JitOptRef nos;
            nos = stack_pointer[-2];
            if (sym_matches_type(nos, &PyUnicode_Type)) {
                REPLACE_OP(this_instr, _NOP, 0, 0);
            }
            sym_set_type(nos, &PyUnicode_Type);
            break;
        }

        case _GUARD_TOS_UNICODE: {
            JitOptRef value;
            value = stack_pointer[-1];
            if (sym_matches_type(value, &PyUnicode_Type)) {
                REPLACE_OP(this_instr, _NOP, 0, 0);
            }
            sym_set_type(value, &PyUnicode_Type);
            break;
        }

        case _TO_BOOL_STR: {
            JitOptRef value;
            JitOptRef res;
            value = stack_pointer[-1];
            int already_bool = optimize_to_bool(this_instr, ctx, value, &res);
            if (!already_bool) {
                res = sym_new_truthiness(ctx, value, true);
            }
            stack_pointer[-1] = res;
            break;
        }

        case _REPLACE_WITH_TRUE: {
            JitOptRef res;
            REPLACE_OP(this_instr, _POP_TOP_LOAD_CONST_INLINE_BORROW, 0, (uintptr_t)Py_True);
            res = sym_new_const(ctx, Py_True);
            stack_pointer[-1] = res;
            break;
        }
        case _UNARY_INVERT: {
            JitOptRef value;
            JitOptRef res;
            value = stack_pointer[-1];
            if (!sym_matches_type(value, &PyBool_Type)) {
                if (
                    sym_is_safe_const(ctx, value)
                ) {
                    JitOptRef value_sym = value;
                    _PyStackRef value = sym_get_const_as_stackref(ctx, value_sym);
                    _PyStackRef res_stackref;
                    /* Start of uop copied from bytecodes for constant evaluation */
                    PyObject *res_o = PyNumber_Invert(PyStackRef_AsPyObjectBorrow(value));
                    PyStackRef_CLOSE(value);
                    if (res_o == NULL) {
                        goto error;
                    }
                    res_stackref = PyStackRef_FromPyObjectSteal(res_o);
                    /* End of uop copied from bytecodes for constant evaluation */
                    res = sym_new_const_steal(ctx, PyStackRef_AsPyObjectSteal(res_stackref));
                    if (sym_is_const(ctx, res)) {
                        PyObject *result = sym_get_const(ctx, res);
                        if (_Py_IsImmortal(result)) {
                            // Replace with _POP_TOP_LOAD_CONST_INLINE_BORROW since we have one input and an immortal result
                            REPLACE_OP(this_instr, _POP_TOP_LOAD_CONST_INLINE_BORROW, 0, (uintptr_t)result);
                        }
                    }
                    stack_pointer[-1] = res;
                    break;
                }
            }
            if (sym_matches_type(value, &PyLong_Type)) {
                res = sym_new_type(ctx, &PyLong_Type);
            }
            else {
                res = sym_new_not_null(ctx);
            }
            stack_pointer[-1] = res;
            break;
        }

        case _GUARD_NOS_INT: {
            JitOptRef left;
            left = stack_pointer[-2];
            if (sym_is_compact_int(left)) {
                REPLACE_OP(this_instr, _NOP, 0, 0);
            }
            else {
                if (sym_get_type(left) == &PyLong_Type) {
                    REPLACE_OP(this_instr, _GUARD_NOS_OVERFLOWED, 0, 0);
                }
                sym_set_compact_int(left);
            }
            break;
        }

        case _GUARD_TOS_INT: {
            JitOptRef value;
            value = stack_pointer[-1];
            if (sym_is_compact_int(value)) {
                REPLACE_OP(this_instr, _NOP, 0, 0);
            }
            else {
                if (sym_get_type(value) == &PyLong_Type) {
                    REPLACE_OP(this_instr, _GUARD_TOS_OVERFLOWED, 0, 0);
                }
                sym_set_compact_int(value);
            }
            break;
        }

        case _GUARD_NOS_OVERFLOWED: {
            break;
        }

        case _GUARD_TOS_OVERFLOWED: {
            break;
        }

        case _BINARY_OP_MULTIPLY_INT: {
            JitOptRef right;
            JitOptRef left;
            JitOptRef res;
            right = stack_pointer[-1];
            left = stack_pointer[-2];
            if (
                sym_is_safe_const(ctx, left) &&
                sym_is_safe_const(ctx, right)
            ) {
                JitOptRef left_sym = left;
                JitOptRef right_sym = right;
                _PyStackRef left = sym_get_const_as_stackref(ctx, left_sym);
                _PyStackRef right = sym_get_const_as_stackref(ctx, right_sym);
                _PyStackRef res_stackref;
                /* Start of uop copied from bytecodes for constant evaluation */
                PyObject *left_o = PyStackRef_AsPyObjectBorrow(left);
                PyObject *right_o = PyStackRef_AsPyObjectBorrow(right);
                assert(PyLong_CheckExact(left_o));
                assert(PyLong_CheckExact(right_o));
                assert(_PyLong_BothAreCompact((PyLongObject *)left_o, (PyLongObject *)right_o));
                STAT_INC(BINARY_OP, hit);
                res_stackref = _PyCompactLong_Multiply((PyLongObject *)left_o, (PyLongObject *)right_o);
                if (PyStackRef_IsNull(res_stackref )) {
                    ctx->done = true;
                    break;
                }
                PyStackRef_CLOSE_SPECIALIZED(right, _PyLong_ExactDealloc);
                PyStackRef_CLOSE_SPECIALIZED(left, _PyLong_ExactDealloc);
                /* End of uop copied from bytecodes for constant evaluation */
                res = sym_new_const_steal(ctx, PyStackRef_AsPyObjectSteal(res_stackref));
                if (sym_is_const(ctx, res)) {
                    PyObject *result = sym_get_const(ctx, res);
                    if (_Py_IsImmortal(result)) {
                        // Replace with _POP_TWO_LOAD_CONST_INLINE_BORROW since we have two inputs and an immortal result
                        REPLACE_OP(this_instr, _POP_TWO_LOAD_CONST_INLINE_BORROW, 0, (uintptr_t)result);
                    }
                }
                stack_pointer[-2] = res;
                stack_pointer += -1;
                assert(WITHIN_STACK_BOUNDS());
                break;
            }
            res = sym_new_compact_int(ctx);
            stack_pointer[-2] = res;
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }
        case _BINARY_OP_ADD_INT: {
            JitOptRef right;
            JitOptRef left;
            JitOptRef res;
            right = stack_pointer[-1];
            left = stack_pointer[-2];
            if (
                sym_is_safe_const(ctx, left) &&
                sym_is_safe_const(ctx, right)
            ) {
                JitOptRef left_sym = left;
                JitOptRef right_sym = right;
                _PyStackRef left = sym_get_const_as_stackref(ctx, left_sym);
                _PyStackRef right = sym_get_const_as_stackref(ctx, right_sym);
                _PyStackRef res_stackref;
                /* Start of uop copied from bytecodes for constant evaluation */
                PyObject *left_o = PyStackRef_AsPyObjectBorrow(left);
                PyObject *right_o = PyStackRef_AsPyObjectBorrow(right);
                assert(PyLong_CheckExact(left_o));
                assert(PyLong_CheckExact(right_o));
                assert(_PyLong_BothAreCompact((PyLongObject *)left_o, (PyLongObject *)right_o));
                STAT_INC(BINARY_OP, hit);
                res_stackref = _PyCompactLong_Add((PyLongObject *)left_o, (PyLongObject *)right_o);
                if (PyStackRef_IsNull(res_stackref )) {
                    ctx->done = true;
                    break;
                }
                PyStackRef_CLOSE_SPECIALIZED(right, _PyLong_ExactDealloc);
                PyStackRef_CLOSE_SPECIALIZED(left, _PyLong_ExactDealloc);
                /* End of uop copied from bytecodes for constant evaluation */
                res = sym_new_const_steal(ctx, PyStackRef_AsPyObjectSteal(res_stackref));
                if (sym_is_const(ctx, res)) {
                    PyObject *result = sym_get_const(ctx, res);
                    if (_Py_IsImmortal(result)) {
                        // Replace with _POP_TWO_LOAD_CONST_INLINE_BORROW since we have two inputs and an immortal result
                        REPLACE_OP(this_instr, _POP_TWO_LOAD_CONST_INLINE_BORROW, 0, (uintptr_t)result);
                    }
                }
                stack_pointer[-2] = res;
                stack_pointer += -1;
                assert(WITHIN_STACK_BOUNDS());
                break;
            }
            res = sym_new_compact_int(ctx);
            stack_pointer[-2] = res;
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _BINARY_OP_SUBTRACT_INT: {
            JitOptRef right;
            JitOptRef left;
            JitOptRef res;
            right = stack_pointer[-1];
            left = stack_pointer[-2];
            if (
                sym_is_safe_const(ctx, left) &&
                sym_is_safe_const(ctx, right)
            ) {
                JitOptRef left_sym = left;
                JitOptRef right_sym = right;
                _PyStackRef left = sym_get_const_as_stackref(ctx, left_sym);
                _PyStackRef right = sym_get_const_as_stackref(ctx, right_sym);
                _PyStackRef res_stackref;
                /* Start of uop copied from bytecodes for constant evaluation */
                PyObject *left_o = PyStackRef_AsPyObjectBorrow(left);
                PyObject *right_o = PyStackRef_AsPyObjectBorrow(right);
                assert(PyLong_CheckExact(left_o));
                assert(PyLong_CheckExact(right_o));
                assert(_PyLong_BothAreCompact((PyLongObject *)left_o, (PyLongObject *)right_o));
                STAT_INC(BINARY_OP, hit);
                res_stackref = _PyCompactLong_Subtract((PyLongObject *)left_o, (PyLongObject *)right_o);
                if (PyStackRef_IsNull(res_stackref )) {
                    ctx->done = true;
                    break;
                }
                PyStackRef_CLOSE_SPECIALIZED(right, _PyLong_ExactDealloc);
                PyStackRef_CLOSE_SPECIALIZED(left, _PyLong_ExactDealloc);
                /* End of uop copied from bytecodes for constant evaluation */
                res = sym_new_const_steal(ctx, PyStackRef_AsPyObjectSteal(res_stackref));
                if (sym_is_const(ctx, res)) {
                    PyObject *result = sym_get_const(ctx, res);
                    if (_Py_IsImmortal(result)) {
                        // Replace with _POP_TWO_LOAD_CONST_INLINE_BORROW since we have two inputs and an immortal result
                        REPLACE_OP(this_instr, _POP_TWO_LOAD_CONST_INLINE_BORROW, 0, (uintptr_t)result);
                    }
                }
                stack_pointer[-2] = res;
                stack_pointer += -1;
                assert(WITHIN_STACK_BOUNDS());
                break;
            }
            res = sym_new_compact_int(ctx);
            stack_pointer[-2] = res;
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _GUARD_NOS_FLOAT: {
            JitOptRef left;
            left = stack_pointer[-2];
            if (sym_matches_type(left, &PyFloat_Type)) {
                REPLACE_OP(this_instr, _NOP, 0, 0);
            }
            sym_set_type(left, &PyFloat_Type);
            break;
        }

        case _GUARD_TOS_FLOAT: {
            JitOptRef value;
            value = stack_pointer[-1];
            if (sym_matches_type(value, &PyFloat_Type)) {
                REPLACE_OP(this_instr, _NOP, 0, 0);
            }
            sym_set_type(value, &PyFloat_Type);
            break;
        }
        case _BINARY_OP_MULTIPLY_FLOAT: {
            JitOptRef right;
            JitOptRef left;
            JitOptRef res;
            right = stack_pointer[-1];
            left = stack_pointer[-2];
            if (
                sym_is_safe_const(ctx, left) &&
                sym_is_safe_const(ctx, right)
            ) {
                JitOptRef left_sym = left;
                JitOptRef right_sym = right;
                _PyStackRef left = sym_get_const_as_stackref(ctx, left_sym);
                _PyStackRef right = sym_get_const_as_stackref(ctx, right_sym);
                _PyStackRef res_stackref;
                /* Start of uop copied from bytecodes for constant evaluation */
                PyObject *left_o = PyStackRef_AsPyObjectBorrow(left);
                PyObject *right_o = PyStackRef_AsPyObjectBorrow(right);
                assert(PyFloat_CheckExact(left_o));
                assert(PyFloat_CheckExact(right_o));
                STAT_INC(BINARY_OP, hit);
                double dres =
                    ((PyFloatObject *)left_o)->ob_fval *
                    ((PyFloatObject *)right_o)->ob_fval;
                res_stackref = _PyFloat_FromDouble_ConsumeInputs(left, right, dres);
                if (PyStackRef_IsNull(res_stackref )) {
                    goto error;
                }
                /* End of uop copied from bytecodes for constant evaluation */
                res = sym_new_const_steal(ctx, PyStackRef_AsPyObjectSteal(res_stackref));
                if (sym_is_const(ctx, res)) {
                    PyObject *result = sym_get_const(ctx, res);
                    if (_Py_IsImmortal(result)) {
                        // Replace with _POP_TWO_LOAD_CONST_INLINE_BORROW since we have two inputs and an immortal result
                        REPLACE_OP(this_instr, _POP_TWO_LOAD_CONST_INLINE_BORROW, 0, (uintptr_t)result);
                    }
                }
                stack_pointer[-2] = res;
                stack_pointer += -1;
                assert(WITHIN_STACK_BOUNDS());
                break;
            }
            res = sym_new_type(ctx, &PyFloat_Type);
            if (PyJitRef_IsBorrowed(left) && PyJitRef_IsBorrowed(right)) {
                REPLACE_OP(this_instr, op_without_decref_inputs[opcode], oparg, 0);
            }
            stack_pointer[-2] = res;
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _BINARY_OP_ADD_FLOAT: {
            JitOptRef right;
            JitOptRef left;
            JitOptRef res;
            right = stack_pointer[-1];
            left = stack_pointer[-2];
            if (
                sym_is_safe_const(ctx, left) &&
                sym_is_safe_const(ctx, right)
            ) {
                JitOptRef left_sym = left;
                JitOptRef right_sym = right;
                _PyStackRef left = sym_get_const_as_stackref(ctx, left_sym);
                _PyStackRef right = sym_get_const_as_stackref(ctx, right_sym);
                _PyStackRef res_stackref;
                /* Start of uop copied from bytecodes for constant evaluation */
                PyObject *left_o = PyStackRef_AsPyObjectBorrow(left);
                PyObject *right_o = PyStackRef_AsPyObjectBorrow(right);
                assert(PyFloat_CheckExact(left_o));
                assert(PyFloat_CheckExact(right_o));
                STAT_INC(BINARY_OP, hit);
                double dres =
                    ((PyFloatObject *)left_o)->ob_fval +
                    ((PyFloatObject *)right_o)->ob_fval;
                res_stackref = _PyFloat_FromDouble_ConsumeInputs(left, right, dres);
                if (PyStackRef_IsNull(res_stackref )) {
                    goto error;
                }
                /* End of uop copied from bytecodes for constant evaluation */
                res = sym_new_const_steal(ctx, PyStackRef_AsPyObjectSteal(res_stackref));
                if (sym_is_const(ctx, res)) {
                    PyObject *result = sym_get_const(ctx, res);
                    if (_Py_IsImmortal(result)) {
                        // Replace with _POP_TWO_LOAD_CONST_INLINE_BORROW since we have two inputs and an immortal result
                        REPLACE_OP(this_instr, _POP_TWO_LOAD_CONST_INLINE_BORROW, 0, (uintptr_t)result);
                    }
                }
                stack_pointer[-2] = res;
                stack_pointer += -1;
                assert(WITHIN_STACK_BOUNDS());
                break;
            }
            res = sym_new_type(ctx, &PyFloat_Type);
            if (PyJitRef_IsBorrowed(left) && PyJitRef_IsBorrowed(right)) {
                REPLACE_OP(this_instr, op_without_decref_inputs[opcode], oparg, 0);
            }
            stack_pointer[-2] = res;
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _BINARY_OP_SUBTRACT_FLOAT: {
            JitOptRef right;
            JitOptRef left;
            JitOptRef res;
            right = stack_pointer[-1];
            left = stack_pointer[-2];
            if (
                sym_is_safe_const(ctx, left) &&
                sym_is_safe_const(ctx, right)
            ) {
                JitOptRef left_sym = left;
                JitOptRef right_sym = right;
                _PyStackRef left = sym_get_const_as_stackref(ctx, left_sym);
                _PyStackRef right = sym_get_const_as_stackref(ctx, right_sym);
                _PyStackRef res_stackref;
                /* Start of uop copied from bytecodes for constant evaluation */
                PyObject *left_o = PyStackRef_AsPyObjectBorrow(left);
                PyObject *right_o = PyStackRef_AsPyObjectBorrow(right);
                assert(PyFloat_CheckExact(left_o));
                assert(PyFloat_CheckExact(right_o));
                STAT_INC(BINARY_OP, hit);
                double dres =
                    ((PyFloatObject *)left_o)->ob_fval -
                    ((PyFloatObject *)right_o)->ob_fval;
                res_stackref = _PyFloat_FromDouble_ConsumeInputs(left, right, dres);
                if (PyStackRef_IsNull(res_stackref )) {
                    goto error;
                }
                /* End of uop copied from bytecodes for constant evaluation */
                res = sym_new_const_steal(ctx, PyStackRef_AsPyObjectSteal(res_stackref));
                if (sym_is_const(ctx, res)) {
                    PyObject *result = sym_get_const(ctx, res);
                    if (_Py_IsImmortal(result)) {
                        // Replace with _POP_TWO_LOAD_CONST_INLINE_BORROW since we have two inputs and an immortal result
                        REPLACE_OP(this_instr, _POP_TWO_LOAD_CONST_INLINE_BORROW, 0, (uintptr_t)result);
                    }
                }
                stack_pointer[-2] = res;
                stack_pointer += -1;
                assert(WITHIN_STACK_BOUNDS());
                break;
            }
            res = sym_new_type(ctx, &PyFloat_Type);
            if (PyJitRef_IsBorrowed(left) && PyJitRef_IsBorrowed(right)) {
                REPLACE_OP(this_instr, op_without_decref_inputs[opcode], oparg, 0);
            }
            stack_pointer[-2] = res;
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _BINARY_OP_MULTIPLY_FLOAT__NO_DECREF_INPUTS: {
            JitOptRef res;
            res = sym_new_not_null(ctx);
            stack_pointer[-2] = res;
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _BINARY_OP_ADD_FLOAT__NO_DECREF_INPUTS: {
            JitOptRef res;
            res = sym_new_not_null(ctx);
            stack_pointer[-2] = res;
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _BINARY_OP_SUBTRACT_FLOAT__NO_DECREF_INPUTS: {
            JitOptRef res;
            res = sym_new_not_null(ctx);
            stack_pointer[-2] = res;
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }
        case _BINARY_OP_ADD_UNICODE: {
            JitOptRef right;
            JitOptRef left;
            JitOptRef res;
            right = stack_pointer[-1];
            left = stack_pointer[-2];
            if (
                sym_is_safe_const(ctx, left) &&
                sym_is_safe_const(ctx, right)
            ) {
                JitOptRef left_sym = left;
                JitOptRef right_sym = right;
                _PyStackRef left = sym_get_const_as_stackref(ctx, left_sym);
                _PyStackRef right = sym_get_const_as_stackref(ctx, right_sym);
                _PyStackRef res_stackref;
                /* Start of uop copied from bytecodes for constant evaluation */
                PyObject *left_o = PyStackRef_AsPyObjectBorrow(left);
                PyObject *right_o = PyStackRef_AsPyObjectBorrow(right);
                assert(PyUnicode_CheckExact(left_o));
                assert(PyUnicode_CheckExact(right_o));
                STAT_INC(BINARY_OP, hit);
                PyObject *res_o = PyUnicode_Concat(left_o, right_o);
                PyStackRef_CLOSE_SPECIALIZED(right, _PyUnicode_ExactDealloc);
                PyStackRef_CLOSE_SPECIALIZED(left, _PyUnicode_ExactDealloc);
                if (res_o == NULL) {
                    goto error;
                }
                res_stackref = PyStackRef_FromPyObjectSteal(res_o);
                /* End of uop copied from bytecodes for constant evaluation */
                res = sym_new_const_steal(ctx, PyStackRef_AsPyObjectSteal(res_stackref));
                if (sym_is_const(ctx, res)) {
                    PyObject *result = sym_get_const(ctx, res);
                    if (_Py_IsImmortal(result)) {
                        // Replace with _POP_TWO_LOAD_CONST_INLINE_BORROW since we have two inputs and an immortal result
                        REPLACE_OP(this_instr, _POP_TWO_LOAD_CONST_INLINE_BORROW, 0, (uintptr_t)result);
                    }
                }
                stack_pointer[-2] = res;
                stack_pointer += -1;
                assert(WITHIN_STACK_BOUNDS());
                break;
            }
            res = sym_new_type(ctx, &PyUnicode_Type);
            stack_pointer[-2] = res;
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _BINARY_OP_INPLACE_ADD_UNICODE: {
            JitOptRef right;
            JitOptRef left;
            right = stack_pointer[-1];
            left = stack_pointer[-2];
            JitOptRef res;
            if (sym_is_const(ctx, left) && sym_is_const(ctx, right)) {
                assert(PyUnicode_CheckExact(sym_get_const(ctx, left)));
                assert(PyUnicode_CheckExact(sym_get_const(ctx, right)));
                PyObject *temp = PyUnicode_Concat(sym_get_const(ctx, left), sym_get_const(ctx, right));
                if (temp == NULL) {
                    goto error;
                }
                res = sym_new_const(ctx, temp);
                Py_DECREF(temp);
            }
            else {
                res = sym_new_type(ctx, &PyUnicode_Type);
            }
            GETLOCAL(this_instr->operand0) = res;
            stack_pointer += -2;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _GUARD_BINARY_OP_EXTEND: {
            break;
        }

        case _BINARY_OP_EXTEND: {
            JitOptRef res;
            res = sym_new_not_null(ctx);
            stack_pointer[-2] = res;
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _BINARY_SLICE: {
            JitOptRef container;
            JitOptRef res;
            container = stack_pointer[-3];
            PyTypeObject *type = sym_get_type(container);
            if (type == &PyUnicode_Type ||
                type == &PyList_Type ||
                type == &PyTuple_Type)
            {
                res = sym_new_type(ctx, type);
            }
            else {
                res = sym_new_not_null(ctx);
            }
            stack_pointer[-3] = res;
            stack_pointer += -2;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _STORE_SLICE: {
            stack_pointer += -4;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _BINARY_OP_SUBSCR_LIST_INT: {
            JitOptRef res;
            res = sym_new_not_null(ctx);
            stack_pointer[-2] = res;
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _BINARY_OP_SUBSCR_LIST_SLICE: {
            JitOptRef res;
            res = sym_new_not_null(ctx);
            stack_pointer[-2] = res;
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _BINARY_OP_SUBSCR_STR_INT: {
            JitOptRef res;
            res = sym_new_type(ctx, &PyUnicode_Type);
            stack_pointer[-2] = res;
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _GUARD_NOS_TUPLE: {
            JitOptRef nos;
            nos = stack_pointer[-2];
            if (sym_matches_type(nos, &PyTuple_Type)) {
                REPLACE_OP(this_instr, _NOP, 0, 0);
            }
            sym_set_type(nos, &PyTuple_Type);
            break;
        }

        case _GUARD_TOS_TUPLE: {
            JitOptRef tos;
            tos = stack_pointer[-1];
            if (sym_matches_type(tos, &PyTuple_Type)) {
                REPLACE_OP(this_instr, _NOP, 0, 0);
            }
            sym_set_type(tos, &PyTuple_Type);
            break;
        }

        case _BINARY_OP_SUBSCR_TUPLE_INT: {
            JitOptRef sub_st;
            JitOptRef tuple_st;
            JitOptRef res;
            sub_st = stack_pointer[-1];
            tuple_st = stack_pointer[-2];
            assert(sym_matches_type(tuple_st, &PyTuple_Type));
            if (sym_is_const(ctx, sub_st)) {
                assert(PyLong_CheckExact(sym_get_const(ctx, sub_st)));
                long index = PyLong_AsLong(sym_get_const(ctx, sub_st));
                assert(index >= 0);
                Py_ssize_t tuple_length = sym_tuple_length(tuple_st);
                if (tuple_length == -1) {
                    res = sym_new_not_null(ctx);
                }
                else {
                    assert(index < tuple_length);
                    res = sym_tuple_getitem(ctx, tuple_st, index);
                }
            }
            else {
                res = sym_new_not_null(ctx);
            }
            stack_pointer[-2] = res;
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }
        case _GUARD_NOS_DICT: {
            JitOptRef nos;
            nos = stack_pointer[-2];
            if (sym_matches_type(nos, &PyDict_Type)) {
                REPLACE_OP(this_instr, _NOP, 0, 0);
            }
            sym_set_type(nos, &PyDict_Type);
            break;
        }

        case _GUARD_TOS_DICT: {
            JitOptRef tos;
            tos = stack_pointer[-1];
            if (sym_matches_type(tos, &PyDict_Type)) {
                REPLACE_OP(this_instr, _NOP, 0, 0);
            }
            sym_set_type(tos, &PyDict_Type);
            break;
        }

        case _BINARY_OP_SUBSCR_DICT: {
            JitOptRef res;
            res = sym_new_not_null(ctx);
            stack_pointer[-2] = res;
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _BINARY_OP_SUBSCR_CHECK_FUNC: {
            JitOptRef getitem;
            getitem = sym_new_not_null(ctx);
            stack_pointer[0] = getitem;
            stack_pointer += 1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _BINARY_OP_SUBSCR_INIT_CALL: {
            JitOptRef new_frame;
            new_frame = PyJitRef_NULL;
            ctx->done = true;
            stack_pointer[-3] = new_frame;
            stack_pointer += -2;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _LIST_APPEND: {
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _SET_ADD: {
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _STORE_SUBSCR: {
            stack_pointer += -3;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _STORE_SUBSCR_LIST_INT: {
            stack_pointer += -3;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _STORE_SUBSCR_DICT: {
            stack_pointer += -3;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _DELETE_SUBSCR: {
            stack_pointer += -2;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _CALL_INTRINSIC_1: {
            JitOptRef res;
            res = sym_new_not_null(ctx);
            stack_pointer[-1] = res;
            break;
        }

        case _CALL_INTRINSIC_2: {
            JitOptRef res;
            res = sym_new_not_null(ctx);
            stack_pointer[-2] = res;
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _RETURN_VALUE: {
            JitOptRef retval;
            JitOptRef res;
            retval = stack_pointer[-1];
            JitOptRef temp = PyJitRef_StripReferenceInfo(retval);
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            ctx->frame->stack_pointer = stack_pointer;
            PyCodeObject *returning_code = get_code_with_logging(this_instr);
            if (returning_code == NULL) {
                ctx->done = true;
                break;
            }
            int returning_stacklevel = this_instr->operand1;
            if (ctx->curr_frame_depth >= 2) {
                PyCodeObject *expected_code = ctx->frames[ctx->curr_frame_depth - 2].code;
                if (expected_code == returning_code) {
                    assert((this_instr + 1)->opcode == _GUARD_IP_RETURN_VALUE);
                    REPLACE_OP((this_instr + 1), _NOP, 0, 0);
                }
            }
            if (frame_pop(ctx, returning_code, returning_stacklevel)) {
                break;
            }
            stack_pointer = ctx->frame->stack_pointer;
            res = temp;
            stack_pointer[0] = res;
            stack_pointer += 1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _GET_AITER: {
            JitOptRef iter;
            iter = sym_new_not_null(ctx);
            stack_pointer[-1] = iter;
            break;
        }

        case _GET_ANEXT: {
            JitOptRef awaitable;
            awaitable = sym_new_not_null(ctx);
            stack_pointer[0] = awaitable;
            stack_pointer += 1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _GET_AWAITABLE: {
            JitOptRef iter;
            iter = sym_new_not_null(ctx);
            stack_pointer[-1] = iter;
            break;
        }

        /* _SEND is not a viable micro-op for tier 2 */

        case _SEND_GEN_FRAME: {
            JitOptRef gen_frame;
            gen_frame = PyJitRef_NULL;
            ctx->done = true;
            stack_pointer[-1] = gen_frame;
            break;
        }
        case _YIELD_VALUE: {
            JitOptRef retval;
            JitOptRef value;
            retval = stack_pointer[-1];
            JitOptRef temp = PyJitRef_StripReferenceInfo(retval);
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            ctx->frame->stack_pointer = stack_pointer;
            PyCodeObject *returning_code = get_code_with_logging(this_instr);
            if (returning_code == NULL) {
                ctx->done = true;
                break;
            }
            _Py_BloomFilter_Add(dependencies, returning_code);
            int returning_stacklevel = this_instr->operand1;
            if (frame_pop(ctx, returning_code, returning_stacklevel)) {
                break;
            }
            stack_pointer = ctx->frame->stack_pointer;
            value = temp;
            stack_pointer[0] = value;
            stack_pointer += 1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _POP_EXCEPT: {
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _LOAD_COMMON_CONSTANT: {
            JitOptRef value;
            value = sym_new_not_null(ctx);
            stack_pointer[0] = value;
            stack_pointer += 1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _LOAD_BUILD_CLASS: {
            JitOptRef bc;
            bc = sym_new_not_null(ctx);
            stack_pointer[0] = bc;
            stack_pointer += 1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _STORE_NAME: {
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _DELETE_NAME: {
            break;
        }

        case _UNPACK_SEQUENCE: {
            JitOptRef *values;
            JitOptRef *top;
            values = &stack_pointer[-1];
            top = &stack_pointer[-1 + oparg];
            (void)top;
            for (int i = 0; i < oparg; i++) {
                values[i] = sym_new_unknown(ctx);
            }
            stack_pointer += -1 + oparg;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _UNPACK_SEQUENCE_TWO_TUPLE: {
            JitOptRef seq;
            JitOptRef val1;
            JitOptRef val0;
            seq = stack_pointer[-1];
            val0 = sym_tuple_getitem(ctx, seq, 0);
            val1 = sym_tuple_getitem(ctx, seq, 1);
            stack_pointer[-1] = val1;
            stack_pointer[0] = val0;
            stack_pointer += 1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _UNPACK_SEQUENCE_TUPLE: {
            JitOptRef seq;
            JitOptRef *values;
            seq = stack_pointer[-1];
            values = &stack_pointer[-1];
            for (int i = 0; i < oparg; i++) {
                values[i] = sym_tuple_getitem(ctx, seq, oparg - i - 1);
            }
            stack_pointer += -1 + oparg;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _UNPACK_SEQUENCE_LIST: {
            JitOptRef *values;
            values = &stack_pointer[-1];
            for (int _i = oparg; --_i >= 0;) {
                values[_i] = sym_new_not_null(ctx);
            }
            stack_pointer += -1 + oparg;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _UNPACK_EX: {
            JitOptRef *values;
            JitOptRef *top;
            values = &stack_pointer[-1];
            top = &stack_pointer[(oparg & 0xFF) + (oparg >> 8)];
            (void)top;
            int totalargs = (oparg & 0xFF) + (oparg >> 8) + 1;
            for (int i = 0; i < totalargs; i++) {
                values[i] = sym_new_unknown(ctx);
            }
            stack_pointer += (oparg & 0xFF) + (oparg >> 8);
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _STORE_ATTR: {
            stack_pointer += -2;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _DELETE_ATTR: {
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _STORE_GLOBAL: {
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _DELETE_GLOBAL: {
            break;
        }

        case _LOAD_LOCALS: {
            JitOptRef locals;
            locals = sym_new_not_null(ctx);
            stack_pointer[0] = locals;
            stack_pointer += 1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        /* _LOAD_FROM_DICT_OR_GLOBALS is not a viable micro-op for tier 2 */

        case _LOAD_NAME: {
            JitOptRef v;
            v = sym_new_not_null(ctx);
            stack_pointer[0] = v;
            stack_pointer += 1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _LOAD_GLOBAL: {
            JitOptRef *res;
            res = &stack_pointer[0];
            res[0] = sym_new_not_null(ctx);
            stack_pointer += 1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _PUSH_NULL_CONDITIONAL: {
            JitOptRef *null;
            null = &stack_pointer[0];
            if (oparg & 1) {
                REPLACE_OP(this_instr, _PUSH_NULL, 0, 0);
                null[0] = sym_new_null(ctx);
            }
            else {
                REPLACE_OP(this_instr, _NOP, 0, 0);
            }
            stack_pointer += (oparg & 1);
            assert(WITHIN_STACK_BOUNDS());
            break;
        }
        case _GUARD_GLOBALS_VERSION: {
            uint16_t version = (uint16_t)this_instr->operand0;
            if (ctx->frame->func != NULL) {
                PyObject *globals = ctx->frame->func->func_globals;
                if (incorrect_keys(globals, version)) {
                    OPT_STAT_INC(remove_globals_incorrect_keys);
                    ctx->done = true;
                }
                else if (get_mutations(globals) >= _Py_MAX_ALLOWED_GLOBALS_MODIFICATIONS) {
                }
                else {
                    if (!ctx->frame->globals_watched) {
                        PyDict_Watch(GLOBALS_WATCHER_ID, globals);
                        _Py_BloomFilter_Add(dependencies, globals);
                        ctx->frame->globals_watched = true;
                    }
                    if (ctx->frame->globals_checked_version == version) {
                        REPLACE_OP(this_instr, _NOP, 0, 0);
                    }
                }
            }
            ctx->frame->globals_checked_version = version;
            break;
        }

        case _LOAD_GLOBAL_MODULE: {
            JitOptRef res;
            uint16_t version = (uint16_t)this_instr->operand0;
            uint16_t index = (uint16_t)this_instr->operand0;
            (void)index;
            PyObject *cnst = NULL;
            if (ctx->frame->func != NULL) {
                PyObject *globals = ctx->frame->func->func_globals;
                if (incorrect_keys(globals, version)) {
                    OPT_STAT_INC(remove_globals_incorrect_keys);
                    ctx->done = true;
                }
                else if (get_mutations(globals) >= _Py_MAX_ALLOWED_GLOBALS_MODIFICATIONS) {
                }
                else {
                    if (!ctx->frame->globals_watched) {
                        PyDict_Watch(GLOBALS_WATCHER_ID, globals);
                        _Py_BloomFilter_Add(dependencies, globals);
                        ctx->frame->globals_watched = true;
                    }
                    if (ctx->frame->globals_checked_version != version && this_instr[-1].opcode == _NOP) {
                        REPLACE_OP(this_instr-1, _GUARD_GLOBALS_VERSION, 0, version);
                        ctx->frame->globals_checked_version = version;
                    }
                    if (ctx->frame->globals_checked_version == version) {
                        cnst = convert_global_to_const(this_instr, globals, false);
                    }
                }
            }
            if (cnst == NULL) {
                res = sym_new_not_null(ctx);
            }
            else {
                res = sym_new_const(ctx, cnst);
            }
            stack_pointer[0] = res;
            stack_pointer += 1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _LOAD_GLOBAL_BUILTINS: {
            JitOptRef res;
            uint16_t version = (uint16_t)this_instr->operand0;
            uint16_t index = (uint16_t)this_instr->operand0;
            (void)version;
            (void)index;
            PyObject *cnst = NULL;
            PyInterpreterState *interp = _PyInterpreterState_GET();
            PyObject *builtins = interp->builtins;
            if (incorrect_keys(builtins, version)) {
                OPT_STAT_INC(remove_globals_incorrect_keys);
                ctx->done = true;
            }
            else if (interp->rare_events.builtin_dict >= _Py_MAX_ALLOWED_BUILTINS_MODIFICATIONS) {
            }
            else {
                if (!ctx->builtins_watched) {
                    PyDict_Watch(BUILTINS_WATCHER_ID, builtins);
                    ctx->builtins_watched = true;
                }
                if (ctx->frame->globals_checked_version != 0 && ctx->frame->globals_watched) {
                    cnst = convert_global_to_const(this_instr, builtins, false);
                }
            }
            if (cnst == NULL) {
                res = sym_new_not_null(ctx);
            }
            else {
                res = sym_new_const(ctx, cnst);
            }
            stack_pointer[0] = res;
            stack_pointer += 1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _DELETE_FAST: {
            break;
        }

        case _MAKE_CELL: {
            break;
        }

        case _DELETE_DEREF: {
            break;
        }

        case _LOAD_FROM_DICT_OR_DEREF: {
            JitOptRef value;
            value = sym_new_not_null(ctx);
            stack_pointer[-1] = value;
            break;
        }

        case _LOAD_DEREF: {
            JitOptRef value;
            value = sym_new_not_null(ctx);
            stack_pointer[0] = value;
            stack_pointer += 1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _STORE_DEREF: {
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _COPY_FREE_VARS: {
            break;
        }

        case _BUILD_STRING: {
            JitOptRef str;
            str = sym_new_type(ctx, &PyUnicode_Type);
            stack_pointer[-oparg] = str;
            stack_pointer += 1 - oparg;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _BUILD_INTERPOLATION: {
            JitOptRef interpolation;
            interpolation = sym_new_not_null(ctx);
            stack_pointer[-2 - (oparg & 1)] = interpolation;
            stack_pointer += -1 - (oparg & 1);
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _BUILD_TEMPLATE: {
            JitOptRef template;
            template = sym_new_not_null(ctx);
            stack_pointer[-2] = template;
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }
        case _BUILD_TUPLE: {
            JitOptRef *values;
            JitOptRef tup;
            values = &stack_pointer[-oparg];
            tup = sym_new_tuple(ctx, oparg, values);
            stack_pointer[-oparg] = tup;
            stack_pointer += 1 - oparg;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _BUILD_LIST: {
            JitOptRef list;
            list = sym_new_type(ctx, &PyList_Type);
            stack_pointer[-oparg] = list;
            stack_pointer += 1 - oparg;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _LIST_EXTEND: {
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _SET_UPDATE: {
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _BUILD_SET: {
            JitOptRef set;
            set = sym_new_type(ctx, &PySet_Type);
            stack_pointer[-oparg] = set;
            stack_pointer += 1 - oparg;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _BUILD_MAP: {
            JitOptRef map;
            map = sym_new_type(ctx, &PyDict_Type);
            stack_pointer[-oparg*2] = map;
            stack_pointer += 1 - oparg*2;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _SETUP_ANNOTATIONS: {
            break;
        }

        case _DICT_UPDATE: {
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _DICT_MERGE: {
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _MAP_ADD: {
            stack_pointer += -2;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _LOAD_SUPER_ATTR_ATTR: {
            JitOptRef attr_st;
            attr_st = sym_new_not_null(ctx);
            stack_pointer[-3] = attr_st;
            stack_pointer += -2;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _LOAD_SUPER_ATTR_METHOD: {
            JitOptRef attr;
            JitOptRef self_or_null;
            attr = sym_new_not_null(ctx);
            self_or_null = sym_new_not_null(ctx);
            stack_pointer[-3] = attr;
            stack_pointer[-2] = self_or_null;
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _LOAD_ATTR: {
            JitOptRef owner;
            JitOptRef *attr;
            JitOptRef *self_or_null;
            owner = stack_pointer[-1];
            attr = &stack_pointer[-1];
            self_or_null = &stack_pointer[0];
            (void)owner;
            *attr = sym_new_not_null(ctx);
            if (oparg & 1) {
                self_or_null[0] = sym_new_unknown(ctx);
            }
            stack_pointer += (oparg&1);
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _GUARD_TYPE_VERSION: {
            JitOptRef owner;
            owner = stack_pointer[-1];
            uint32_t type_version = (uint32_t)this_instr->operand0;
            assert(type_version);
            if (sym_matches_type_version(owner, type_version)) {
                REPLACE_OP(this_instr, _NOP, 0, 0);
            } else {
                PyTypeObject *type = _PyType_LookupByVersion(type_version);
                if (type) {
                    if (sym_set_type_version(owner, type_version)) {
                        PyType_Watch(TYPE_WATCHER_ID, (PyObject *)type);
                        _Py_BloomFilter_Add(dependencies, type);
                    }
                }
            }
            break;
        }

        case _GUARD_TYPE_VERSION_AND_LOCK: {
            break;
        }

        case _CHECK_MANAGED_OBJECT_HAS_VALUES: {
            break;
        }

        case _LOAD_ATTR_INSTANCE_VALUE: {
            JitOptRef attr;
            uint16_t offset = (uint16_t)this_instr->operand0;
            attr = sym_new_not_null(ctx);
            (void)offset;
            stack_pointer[-1] = attr;
            break;
        }

        case _LOAD_ATTR_MODULE: {
            JitOptRef owner;
            JitOptRef attr;
            owner = stack_pointer[-1];
            uint32_t dict_version = (uint32_t)this_instr->operand0;
            uint16_t index = (uint16_t)this_instr->operand0;
            (void)dict_version;
            (void)index;
            attr = PyJitRef_NULL;
            if (sym_is_const(ctx, owner)) {
                PyModuleObject *mod = (PyModuleObject *)sym_get_const(ctx, owner);
                if (PyModule_CheckExact(mod)) {
                    PyObject *dict = mod->md_dict;
                    stack_pointer[-1] = attr;
                    uint64_t watched_mutations = get_mutations(dict);
                    if (watched_mutations < _Py_MAX_ALLOWED_GLOBALS_MODIFICATIONS) {
                        PyDict_Watch(GLOBALS_WATCHER_ID, dict);
                        _Py_BloomFilter_Add(dependencies, dict);
                        PyObject *res = convert_global_to_const(this_instr, dict, true);
                        if (res == NULL) {
                            attr = sym_new_not_null(ctx);
                        }
                        else {
                            attr = sym_new_const(ctx, res);
                        }
                    }
                }
            }
            if (PyJitRef_IsNull(attr)) {
                attr = sym_new_not_null(ctx);
            }
            stack_pointer[-1] = attr;
            break;
        }

        case _LOAD_ATTR_WITH_HINT: {
            JitOptRef attr;
            uint16_t hint = (uint16_t)this_instr->operand0;
            attr = sym_new_not_null(ctx);
            (void)hint;
            stack_pointer[-1] = attr;
            break;
        }

        case _LOAD_ATTR_SLOT: {
            JitOptRef attr;
            uint16_t index = (uint16_t)this_instr->operand0;
            attr = sym_new_not_null(ctx);
            (void)index;
            stack_pointer[-1] = attr;
            break;
        }

        case _CHECK_ATTR_CLASS: {
            JitOptRef owner;
            owner = stack_pointer[-1];
            uint32_t type_version = (uint32_t)this_instr->operand0;
            PyObject *type = (PyObject *)_PyType_LookupByVersion(type_version);
            if (type) {
                if (type == sym_get_const(ctx, owner)) {
                    REPLACE_OP(this_instr, _NOP, 0, 0);
                }
                else {
                    sym_set_const(owner, type);
                }
            }
            break;
        }

        case _LOAD_ATTR_CLASS: {
            JitOptRef owner;
            JitOptRef attr;
            owner = stack_pointer[-1];
            PyObject *descr = (PyObject *)this_instr->operand0;
            (void)descr;
            PyTypeObject *type = (PyTypeObject *)sym_get_const(ctx, owner);
            PyObject *name = get_co_name(ctx, oparg >> 1);
            attr = lookup_attr(ctx, this_instr, type, name,
                               _POP_TOP_LOAD_CONST_INLINE_BORROW,
                               _POP_TOP_LOAD_CONST_INLINE);
            stack_pointer[-1] = attr;
            break;
        }

        case _LOAD_ATTR_PROPERTY_FRAME: {
            JitOptRef new_frame;
            PyObject *fget = (PyObject *)this_instr->operand0;
            (void)fget;
            new_frame = PyJitRef_NULL;
            ctx->done = true;
            stack_pointer[-1] = new_frame;
            break;
        }

        /* _LOAD_ATTR_GETATTRIBUTE_OVERRIDDEN is not a viable micro-op for tier 2 */

        case _GUARD_DORV_NO_DICT: {
            break;
        }

        case _STORE_ATTR_INSTANCE_VALUE: {
            stack_pointer += -2;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _STORE_ATTR_WITH_HINT: {
            stack_pointer += -2;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _STORE_ATTR_SLOT: {
            stack_pointer += -2;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }
case _COMPARE_OP: {
|
|
JitOptRef right;
|
|
JitOptRef left;
|
|
JitOptRef res;
|
|
right = stack_pointer[-1];
|
|
left = stack_pointer[-2];
|
|
if (
|
|
sym_is_safe_const(ctx, left) &&
|
|
sym_is_safe_const(ctx, right)
|
|
) {
|
|
JitOptRef left_sym = left;
|
|
JitOptRef right_sym = right;
|
|
_PyStackRef left = sym_get_const_as_stackref(ctx, left_sym);
|
|
_PyStackRef right = sym_get_const_as_stackref(ctx, right_sym);
|
|
_PyStackRef res_stackref;
|
|
/* Start of uop copied from bytecodes for constant evaluation */
|
|
PyObject *left_o = PyStackRef_AsPyObjectBorrow(left);
|
|
PyObject *right_o = PyStackRef_AsPyObjectBorrow(right);
|
|
assert((oparg >> 5) <= Py_GE);
|
|
PyObject *res_o = PyObject_RichCompare(left_o, right_o, oparg >> 5);
|
|
if (res_o == NULL) {
|
|
goto error;
|
|
}
|
|
if (oparg & 16) {
|
|
int res_bool = PyObject_IsTrue(res_o);
|
|
Py_DECREF(res_o);
|
|
if (res_bool < 0) {
|
|
goto error;
|
|
}
|
|
res_stackref = res_bool ? PyStackRef_True : PyStackRef_False;
|
|
}
|
|
else {
|
|
res_stackref = PyStackRef_FromPyObjectSteal(res_o);
|
|
}
|
|
/* End of uop copied from bytecodes for constant evaluation */
|
|
res = sym_new_const_steal(ctx, PyStackRef_AsPyObjectSteal(res_stackref));
|
|
if (sym_is_const(ctx, res)) {
|
|
PyObject *result = sym_get_const(ctx, res);
|
|
if (_Py_IsImmortal(result)) {
|
|
// Replace with _POP_TWO_LOAD_CONST_INLINE_BORROW since we have two inputs and an immortal result
|
|
REPLACE_OP(this_instr, _POP_TWO_LOAD_CONST_INLINE_BORROW, 0, (uintptr_t)result);
|
|
}
|
|
}
|
|
stack_pointer[-2] = res;
|
|
stack_pointer += -1;
|
|
assert(WITHIN_STACK_BOUNDS());
|
|
break;
|
|
}
|
|
if (oparg & 16) {
|
|
res = sym_new_type(ctx, &PyBool_Type);
|
|
}
|
|
else {
|
|
res = _Py_uop_sym_new_not_null(ctx);
|
|
}
|
|
stack_pointer[-2] = res;
|
|
stack_pointer += -1;
|
|
assert(WITHIN_STACK_BOUNDS());
|
|
break;
|
|
}
|
|
|
|
        case _COMPARE_OP_FLOAT: {
            JitOptRef right;
            JitOptRef left;
            JitOptRef res;
            right = stack_pointer[-1];
            left = stack_pointer[-2];
            if (
                sym_is_safe_const(ctx, left) &&
                sym_is_safe_const(ctx, right)
            ) {
                JitOptRef left_sym = left;
                JitOptRef right_sym = right;
                _PyStackRef left = sym_get_const_as_stackref(ctx, left_sym);
                _PyStackRef right = sym_get_const_as_stackref(ctx, right_sym);
                _PyStackRef res_stackref;
                /* Start of uop copied from bytecodes for constant evaluation */
                PyObject *left_o = PyStackRef_AsPyObjectBorrow(left);
                PyObject *right_o = PyStackRef_AsPyObjectBorrow(right);
                STAT_INC(COMPARE_OP, hit);
                double dleft = PyFloat_AS_DOUBLE(left_o);
                double dright = PyFloat_AS_DOUBLE(right_o);
                int sign_ish = COMPARISON_BIT(dleft, dright);
                PyStackRef_CLOSE_SPECIALIZED(left, _PyFloat_ExactDealloc);
                PyStackRef_CLOSE_SPECIALIZED(right, _PyFloat_ExactDealloc);
                res_stackref = (sign_ish & oparg) ? PyStackRef_True : PyStackRef_False;
                /* End of uop copied from bytecodes for constant evaluation */
                res = sym_new_const_steal(ctx, PyStackRef_AsPyObjectSteal(res_stackref));
                if (sym_is_const(ctx, res)) {
                    PyObject *result = sym_get_const(ctx, res);
                    if (_Py_IsImmortal(result)) {
                        // Replace with _POP_TWO_LOAD_CONST_INLINE_BORROW since we have two inputs and an immortal result
                        REPLACE_OP(this_instr, _POP_TWO_LOAD_CONST_INLINE_BORROW, 0, (uintptr_t)result);
                    }
                }
                stack_pointer[-2] = res;
                stack_pointer += -1;
                assert(WITHIN_STACK_BOUNDS());
                break;
            }
            res = sym_new_type(ctx, &PyBool_Type);
            stack_pointer[-2] = res;
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _COMPARE_OP_INT: {
            JitOptRef right;
            JitOptRef left;
            JitOptRef res;
            right = stack_pointer[-1];
            left = stack_pointer[-2];
            if (
                sym_is_safe_const(ctx, left) &&
                sym_is_safe_const(ctx, right)
            ) {
                JitOptRef left_sym = left;
                JitOptRef right_sym = right;
                _PyStackRef left = sym_get_const_as_stackref(ctx, left_sym);
                _PyStackRef right = sym_get_const_as_stackref(ctx, right_sym);
                _PyStackRef res_stackref;
                /* Start of uop copied from bytecodes for constant evaluation */
                PyObject *left_o = PyStackRef_AsPyObjectBorrow(left);
                PyObject *right_o = PyStackRef_AsPyObjectBorrow(right);
                assert(_PyLong_IsCompact((PyLongObject *)left_o));
                assert(_PyLong_IsCompact((PyLongObject *)right_o));
                STAT_INC(COMPARE_OP, hit);
                assert(_PyLong_DigitCount((PyLongObject *)left_o) <= 1 &&
                       _PyLong_DigitCount((PyLongObject *)right_o) <= 1);
                Py_ssize_t ileft = _PyLong_CompactValue((PyLongObject *)left_o);
                Py_ssize_t iright = _PyLong_CompactValue((PyLongObject *)right_o);
                int sign_ish = COMPARISON_BIT(ileft, iright);
                PyStackRef_CLOSE_SPECIALIZED(left, _PyLong_ExactDealloc);
                PyStackRef_CLOSE_SPECIALIZED(right, _PyLong_ExactDealloc);
                res_stackref = (sign_ish & oparg) ? PyStackRef_True : PyStackRef_False;
                /* End of uop copied from bytecodes for constant evaluation */
                res = sym_new_const_steal(ctx, PyStackRef_AsPyObjectSteal(res_stackref));
                if (sym_is_const(ctx, res)) {
                    PyObject *result = sym_get_const(ctx, res);
                    if (_Py_IsImmortal(result)) {
                        // Replace with _POP_TWO_LOAD_CONST_INLINE_BORROW since we have two inputs and an immortal result
                        REPLACE_OP(this_instr, _POP_TWO_LOAD_CONST_INLINE_BORROW, 0, (uintptr_t)result);
                    }
                }
                stack_pointer[-2] = res;
                stack_pointer += -1;
                assert(WITHIN_STACK_BOUNDS());
                break;
            }
            res = sym_new_type(ctx, &PyBool_Type);
            stack_pointer[-2] = res;
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _COMPARE_OP_STR: {
            JitOptRef right;
            JitOptRef left;
            JitOptRef res;
            right = stack_pointer[-1];
            left = stack_pointer[-2];
            if (
                sym_is_safe_const(ctx, left) &&
                sym_is_safe_const(ctx, right)
            ) {
                JitOptRef left_sym = left;
                JitOptRef right_sym = right;
                _PyStackRef left = sym_get_const_as_stackref(ctx, left_sym);
                _PyStackRef right = sym_get_const_as_stackref(ctx, right_sym);
                _PyStackRef res_stackref;
                /* Start of uop copied from bytecodes for constant evaluation */
                PyObject *left_o = PyStackRef_AsPyObjectBorrow(left);
                PyObject *right_o = PyStackRef_AsPyObjectBorrow(right);
                STAT_INC(COMPARE_OP, hit);
                int eq = _PyUnicode_Equal(left_o, right_o);
                assert((oparg >> 5) == Py_EQ || (oparg >> 5) == Py_NE);
                PyStackRef_CLOSE_SPECIALIZED(left, _PyUnicode_ExactDealloc);
                PyStackRef_CLOSE_SPECIALIZED(right, _PyUnicode_ExactDealloc);
                assert(eq == 0 || eq == 1);
                assert((oparg & 0xf) == COMPARISON_NOT_EQUALS || (oparg & 0xf) == COMPARISON_EQUALS);
                assert(COMPARISON_NOT_EQUALS + 1 == COMPARISON_EQUALS);
                res_stackref = ((COMPARISON_NOT_EQUALS + eq) & oparg) ? PyStackRef_True : PyStackRef_False;
                /* End of uop copied from bytecodes for constant evaluation */
                res = sym_new_const_steal(ctx, PyStackRef_AsPyObjectSteal(res_stackref));
                if (sym_is_const(ctx, res)) {
                    PyObject *result = sym_get_const(ctx, res);
                    if (_Py_IsImmortal(result)) {
                        // Replace with _POP_TWO_LOAD_CONST_INLINE_BORROW since we have two inputs and an immortal result
                        REPLACE_OP(this_instr, _POP_TWO_LOAD_CONST_INLINE_BORROW, 0, (uintptr_t)result);
                    }
                }
                stack_pointer[-2] = res;
                stack_pointer += -1;
                assert(WITHIN_STACK_BOUNDS());
                break;
            }
            res = sym_new_type(ctx, &PyBool_Type);
            stack_pointer[-2] = res;
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _IS_OP: {
            JitOptRef b;
            b = sym_new_type(ctx, &PyBool_Type);
            stack_pointer[-2] = b;
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _CONTAINS_OP: {
            JitOptRef right;
            JitOptRef left;
            JitOptRef b;
            right = stack_pointer[-1];
            left = stack_pointer[-2];
            if (
                sym_is_safe_const(ctx, left) &&
                sym_is_safe_const(ctx, right)
            ) {
                JitOptRef left_sym = left;
                JitOptRef right_sym = right;
                _PyStackRef left = sym_get_const_as_stackref(ctx, left_sym);
                _PyStackRef right = sym_get_const_as_stackref(ctx, right_sym);
                _PyStackRef b_stackref;
                /* Start of uop copied from bytecodes for constant evaluation */
                PyObject *left_o = PyStackRef_AsPyObjectBorrow(left);
                PyObject *right_o = PyStackRef_AsPyObjectBorrow(right);
                int res = PySequence_Contains(right_o, left_o);
                if (res < 0) {
                    goto error;
                }
                b_stackref = (res ^ oparg) ? PyStackRef_True : PyStackRef_False;
                /* End of uop copied from bytecodes for constant evaluation */
                b = sym_new_const_steal(ctx, PyStackRef_AsPyObjectSteal(b_stackref));
                if (sym_is_const(ctx, b)) {
                    PyObject *result = sym_get_const(ctx, b);
                    if (_Py_IsImmortal(result)) {
                        // Replace with _POP_TWO_LOAD_CONST_INLINE_BORROW since we have two inputs and an immortal result
                        REPLACE_OP(this_instr, _POP_TWO_LOAD_CONST_INLINE_BORROW, 0, (uintptr_t)result);
                    }
                }
                stack_pointer[-2] = b;
                stack_pointer += -1;
                assert(WITHIN_STACK_BOUNDS());
                break;
            }
            b = sym_new_type(ctx, &PyBool_Type);
            stack_pointer[-2] = b;
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _GUARD_TOS_ANY_SET: {
            JitOptRef tos;
            tos = stack_pointer[-1];
            if (sym_matches_type(tos, &PySet_Type) ||
                sym_matches_type(tos, &PyFrozenSet_Type))
            {
                REPLACE_OP(this_instr, _NOP, 0, 0);
            }
            break;
        }

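        /* Type guards in this file follow a common shape: if the abstract
           value already carries the guarded type, the guard is dead and is
           rewritten to _NOP; otherwise the guard stays, and (where a single
           type is involved) sym_set_type() records the type the guard proves,
           so later guards on the same value can be eliminated. */
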
        case _CONTAINS_OP_SET: {
            JitOptRef b;
            b = sym_new_type(ctx, &PyBool_Type);
            stack_pointer[-2] = b;
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _CONTAINS_OP_DICT: {
            JitOptRef b;
            b = sym_new_type(ctx, &PyBool_Type);
            stack_pointer[-2] = b;
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _CHECK_EG_MATCH: {
            JitOptRef rest;
            JitOptRef match;
            rest = sym_new_not_null(ctx);
            match = sym_new_not_null(ctx);
            stack_pointer[-2] = rest;
            stack_pointer[-1] = match;
            break;
        }

        case _CHECK_EXC_MATCH: {
            JitOptRef b;
            b = sym_new_not_null(ctx);
            stack_pointer[-1] = b;
            break;
        }

        case _IMPORT_NAME: {
            JitOptRef res;
            res = sym_new_not_null(ctx);
            stack_pointer[-2] = res;
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _IMPORT_FROM: {
            JitOptRef res;
            res = sym_new_not_null(ctx);
            stack_pointer[0] = res;
            stack_pointer += 1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        /* _POP_JUMP_IF_FALSE is not a viable micro-op for tier 2 */

        /* _POP_JUMP_IF_TRUE is not a viable micro-op for tier 2 */

        case _IS_NONE: {
            JitOptRef b;
            b = sym_new_not_null(ctx);
            stack_pointer[-1] = b;
            break;
        }

        /* _JUMP_BACKWARD_NO_INTERRUPT is not a viable micro-op for tier 2 */

        case _GET_LEN: {
            JitOptRef obj;
            JitOptRef len;
            obj = stack_pointer[-1];
            Py_ssize_t tuple_length = sym_tuple_length(obj);
            if (tuple_length == -1) {
                len = sym_new_type(ctx, &PyLong_Type);
            }
            else {
                assert(tuple_length >= 0);
                PyObject *temp = PyLong_FromSsize_t(tuple_length);
                if (temp == NULL) {
                    goto error;
                }
                if (_Py_IsImmortal(temp)) {
                    REPLACE_OP(this_instr, _LOAD_CONST_INLINE_BORROW, 0, (uintptr_t)temp);
                }
                len = sym_new_const(ctx, temp);
                stack_pointer[0] = len;
                stack_pointer += 1;
                assert(WITHIN_STACK_BOUNDS());
                Py_DECREF(temp);
                stack_pointer += -1;
            }
            stack_pointer[0] = len;
            stack_pointer += 1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

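        /* For _GET_LEN, a statically known tuple length is materialized as a
           PyLong at optimization time; small ints are immortal in current
           CPython, so the common case also rewrites the instruction to
           _LOAD_CONST_INLINE_BORROW and no call happens at runtime. */
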
        case _MATCH_CLASS: {
            JitOptRef attrs;
            attrs = sym_new_not_null(ctx);
            stack_pointer[-3] = attrs;
            stack_pointer += -2;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _MATCH_MAPPING: {
            JitOptRef res;
            res = sym_new_not_null(ctx);
            stack_pointer[0] = res;
            stack_pointer += 1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _MATCH_SEQUENCE: {
            JitOptRef res;
            res = sym_new_not_null(ctx);
            stack_pointer[0] = res;
            stack_pointer += 1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _MATCH_KEYS: {
            JitOptRef values_or_none;
            values_or_none = sym_new_not_null(ctx);
            stack_pointer[0] = values_or_none;
            stack_pointer += 1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _GET_ITER: {
            JitOptRef iterable;
            JitOptRef iter;
            JitOptRef index_or_null;
            iterable = stack_pointer[-1];
            if (sym_matches_type(iterable, &PyTuple_Type) || sym_matches_type(iterable, &PyList_Type)) {
                iter = iterable;
                index_or_null = sym_new_not_null(ctx);
            }
            else {
                iter = sym_new_not_null(ctx);
                index_or_null = sym_new_unknown(ctx);
            }
            stack_pointer[-1] = iter;
            stack_pointer[0] = index_or_null;
            stack_pointer += 1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _GET_YIELD_FROM_ITER: {
            JitOptRef iter;
            iter = sym_new_not_null(ctx);
            stack_pointer[-1] = iter;
            break;
        }

        /* _FOR_ITER is not a viable micro-op for tier 2 */

        case _FOR_ITER_TIER_TWO: {
            JitOptRef next;
            next = sym_new_not_null(ctx);
            stack_pointer[0] = next;
            stack_pointer += 1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        /* _INSTRUMENTED_FOR_ITER is not a viable micro-op for tier 2 */

        case _ITER_CHECK_LIST: {
            break;
        }

        /* _ITER_JUMP_LIST is not a viable micro-op for tier 2 */

        case _GUARD_NOT_EXHAUSTED_LIST: {
            break;
        }

        /* _ITER_NEXT_LIST is not a viable micro-op for tier 2 */

        case _ITER_NEXT_LIST_TIER_TWO: {
            JitOptRef next;
            next = sym_new_not_null(ctx);
            stack_pointer[0] = next;
            stack_pointer += 1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _ITER_CHECK_TUPLE: {
            JitOptRef iter;
            iter = stack_pointer[-2];
            if (sym_matches_type(iter, &PyTuple_Type)) {
                REPLACE_OP(this_instr, _NOP, 0, 0);
            }
            sym_set_type(iter, &PyTuple_Type);
            break;
        }

        /* _ITER_JUMP_TUPLE is not a viable micro-op for tier 2 */

        case _GUARD_NOT_EXHAUSTED_TUPLE: {
            break;
        }

        case _ITER_NEXT_TUPLE: {
            JitOptRef next;
            next = sym_new_not_null(ctx);
            stack_pointer[0] = next;
            stack_pointer += 1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _ITER_CHECK_RANGE: {
            break;
        }

        /* _ITER_JUMP_RANGE is not a viable micro-op for tier 2 */

        case _GUARD_NOT_EXHAUSTED_RANGE: {
            break;
        }

        case _ITER_NEXT_RANGE: {
            JitOptRef next;
            next = sym_new_type(ctx, &PyLong_Type);
            stack_pointer[0] = next;
            stack_pointer += 1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _FOR_ITER_GEN_FRAME: {
            JitOptRef gen_frame;
            gen_frame = PyJitRef_NULL;
            ctx->done = true;
            stack_pointer[0] = gen_frame;
            stack_pointer += 1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

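        /* _FOR_ITER_GEN_FRAME pushes a generator frame whose shape the
           abstract interpreter does not model here, so it pushes a NULL
           abstract frame and sets ctx->done, ending abstract interpretation
           of the trace at this point. */
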
        case _INSERT_NULL: {
            JitOptRef self;
            JitOptRef *method_and_self;
            self = stack_pointer[-1];
            method_and_self = &stack_pointer[-1];
            method_and_self[0] = sym_new_null(ctx);
            method_and_self[1] = self;
            stack_pointer += 1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _LOAD_SPECIAL: {
            JitOptRef *method_and_self;
            method_and_self = &stack_pointer[-2];
            method_and_self[0] = sym_new_not_null(ctx);
            method_and_self[1] = sym_new_unknown(ctx);
            break;
        }

        case _WITH_EXCEPT_START: {
            JitOptRef res;
            res = sym_new_not_null(ctx);
            stack_pointer[0] = res;
            stack_pointer += 1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _PUSH_EXC_INFO: {
            JitOptRef prev_exc;
            JitOptRef new_exc;
            prev_exc = sym_new_not_null(ctx);
            new_exc = sym_new_not_null(ctx);
            stack_pointer[-1] = prev_exc;
            stack_pointer[0] = new_exc;
            stack_pointer += 1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _GUARD_DORV_VALUES_INST_ATTR_FROM_DICT: {
            break;
        }

        case _GUARD_KEYS_VERSION: {
            break;
        }

        case _LOAD_ATTR_METHOD_WITH_VALUES: {
            JitOptRef owner;
            JitOptRef attr;
            JitOptRef self;
            owner = stack_pointer[-1];
            PyObject *descr = (PyObject *)this_instr->operand0;
            (void)descr;
            PyTypeObject *type = sym_get_type(owner);
            PyObject *name = get_co_name(ctx, oparg >> 1);
            attr = lookup_attr(ctx, this_instr, type, name,
                               _LOAD_CONST_UNDER_INLINE_BORROW,
                               _LOAD_CONST_UNDER_INLINE);
            self = owner;
            stack_pointer[-1] = attr;
            stack_pointer[0] = self;
            stack_pointer += 1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _LOAD_ATTR_METHOD_NO_DICT: {
            JitOptRef owner;
            JitOptRef attr;
            JitOptRef self;
            owner = stack_pointer[-1];
            PyObject *descr = (PyObject *)this_instr->operand0;
            (void)descr;
            PyTypeObject *type = sym_get_type(owner);
            PyObject *name = get_co_name(ctx, oparg >> 1);
            attr = lookup_attr(ctx, this_instr, type, name,
                               _LOAD_CONST_UNDER_INLINE_BORROW,
                               _LOAD_CONST_UNDER_INLINE);
            self = owner;
            stack_pointer[-1] = attr;
            stack_pointer[0] = self;
            stack_pointer += 1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _LOAD_ATTR_NONDESCRIPTOR_WITH_VALUES: {
            JitOptRef owner;
            JitOptRef attr;
            owner = stack_pointer[-1];
            PyObject *descr = (PyObject *)this_instr->operand0;
            (void)descr;
            PyTypeObject *type = sym_get_type(owner);
            PyObject *name = get_co_name(ctx, oparg >> 1);
            attr = lookup_attr(ctx, this_instr, type, name,
                               _POP_TOP_LOAD_CONST_INLINE_BORROW,
                               _POP_TOP_LOAD_CONST_INLINE);
            stack_pointer[-1] = attr;
            break;
        }

        case _LOAD_ATTR_NONDESCRIPTOR_NO_DICT: {
            JitOptRef owner;
            JitOptRef attr;
            owner = stack_pointer[-1];
            PyObject *descr = (PyObject *)this_instr->operand0;
            (void)descr;
            PyTypeObject *type = sym_get_type(owner);
            PyObject *name = get_co_name(ctx, oparg >> 1);
            attr = lookup_attr(ctx, this_instr, type, name,
                               _POP_TOP_LOAD_CONST_INLINE_BORROW,
                               _POP_TOP_LOAD_CONST_INLINE);
            stack_pointer[-1] = attr;
            break;
        }

        case _CHECK_ATTR_METHOD_LAZY_DICT: {
            break;
        }

        case _LOAD_ATTR_METHOD_LAZY_DICT: {
            JitOptRef owner;
            JitOptRef attr;
            JitOptRef self;
            owner = stack_pointer[-1];
            PyObject *descr = (PyObject *)this_instr->operand0;
            (void)descr;
            PyTypeObject *type = sym_get_type(owner);
            PyObject *name = get_co_name(ctx, oparg >> 1);
            attr = lookup_attr(ctx, this_instr, type, name,
                               _LOAD_CONST_UNDER_INLINE_BORROW,
                               _LOAD_CONST_UNDER_INLINE);
            self = owner;
            stack_pointer[-1] = attr;
            stack_pointer[0] = self;
            stack_pointer += 1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

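        /* The _LOAD_ATTR_METHOD_* and _LOAD_ATTR_NONDESCRIPTOR_* cases above
           all funnel through lookup_attr(): given the owner's known type and
           the attribute name from the code object, it tries to resolve the
           attribute at optimization time and, on success, presumably rewrites
           the load to one of the two const-load uops passed in (the _BORROW
           variant when a borrowed reference suffices). */
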
        case _MAYBE_EXPAND_METHOD: {
            JitOptRef *args;
            JitOptRef self_or_null;
            JitOptRef callable;
            args = &stack_pointer[-oparg];
            self_or_null = stack_pointer[-1 - oparg];
            callable = stack_pointer[-2 - oparg];
            (void)args;
            callable = sym_new_not_null(ctx);
            self_or_null = sym_new_not_null(ctx);
            stack_pointer[-2 - oparg] = callable;
            stack_pointer[-1 - oparg] = self_or_null;
            break;
        }

        /* _DO_CALL is not a viable micro-op for tier 2 */

        /* _MONITOR_CALL is not a viable micro-op for tier 2 */

        case _PY_FRAME_GENERAL: {
            JitOptRef new_frame;
            assert((this_instr + 2)->opcode == _PUSH_FRAME);
            PyCodeObject *co = get_code_with_logging((this_instr + 2));
            if (co == NULL) {
                ctx->done = true;
                break;
            }
            new_frame = PyJitRef_Wrap((JitOptSymbol *)frame_new(ctx, co, 0, NULL, 0));
            stack_pointer[-2 - oparg] = new_frame;
            stack_pointer += -1 - oparg;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _CHECK_FUNCTION_VERSION: {
            JitOptRef callable;
            callable = stack_pointer[-2 - oparg];
            uint32_t func_version = (uint32_t)this_instr->operand0;
            if (sym_is_const(ctx, callable) && sym_matches_type(callable, &PyFunction_Type)) {
                assert(PyFunction_Check(sym_get_const(ctx, callable)));
                REPLACE_OP(this_instr, _CHECK_FUNCTION_VERSION_INLINE, 0, func_version);
                this_instr->operand1 = (uintptr_t)sym_get_const(ctx, callable);
            }
            sym_set_type(callable, &PyFunction_Type);
            break;
        }

        case _CHECK_FUNCTION_VERSION_INLINE: {
            break;
        }

        case _CHECK_METHOD_VERSION: {
            JitOptRef callable;
            callable = stack_pointer[-2 - oparg];
            uint32_t func_version = (uint32_t)this_instr->operand0;
            if (sym_is_const(ctx, callable) && sym_matches_type(callable, &PyMethod_Type)) {
                PyMethodObject *method = (PyMethodObject *)sym_get_const(ctx, callable);
                assert(PyMethod_Check(method));
                REPLACE_OP(this_instr, _CHECK_FUNCTION_VERSION_INLINE, 0, func_version);
                this_instr->operand1 = (uintptr_t)method->im_func;
            }
            sym_set_type(callable, &PyMethod_Type);
            break;
        }

        case _EXPAND_METHOD: {
            break;
        }

        case _CHECK_IS_NOT_PY_CALLABLE: {
            break;
        }

        case _CALL_NON_PY_GENERAL: {
            JitOptRef res;
            res = sym_new_not_null(ctx);
            stack_pointer[-2 - oparg] = res;
            stack_pointer += -1 - oparg;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _CHECK_CALL_BOUND_METHOD_EXACT_ARGS: {
            JitOptRef null;
            JitOptRef callable;
            null = stack_pointer[-1 - oparg];
            callable = stack_pointer[-2 - oparg];
            sym_set_null(null);
            sym_set_type(callable, &PyMethod_Type);
            break;
        }

        case _INIT_CALL_BOUND_METHOD_EXACT_ARGS: {
            JitOptRef self_or_null;
            JitOptRef callable;
            self_or_null = stack_pointer[-1 - oparg];
            callable = stack_pointer[-2 - oparg];
            callable = sym_new_not_null(ctx);
            self_or_null = sym_new_not_null(ctx);
            stack_pointer[-2 - oparg] = callable;
            stack_pointer[-1 - oparg] = self_or_null;
            break;
        }

        case _CHECK_PEP_523: {
            if (_PyInterpreterState_GET()->eval_frame == NULL) {
                REPLACE_OP(this_instr, _NOP, 0, 0);
            }
            break;
        }

        case _CHECK_FUNCTION_EXACT_ARGS: {
            JitOptRef self_or_null;
            JitOptRef callable;
            self_or_null = stack_pointer[-1 - oparg];
            callable = stack_pointer[-2 - oparg];
            assert(sym_matches_type(callable, &PyFunction_Type));
            if (sym_is_const(ctx, callable)) {
                if (sym_is_null(self_or_null) || sym_is_not_null(self_or_null)) {
                    PyFunctionObject *func = (PyFunctionObject *)sym_get_const(ctx, callable);
                    PyCodeObject *co = (PyCodeObject *)func->func_code;
                    if (co->co_argcount == oparg + !sym_is_null(self_or_null)) {
                        REPLACE_OP(this_instr, _NOP, 0, 0);
                    }
                }
            }
            break;
        }

        case _CHECK_STACK_SPACE: {
            break;
        }

        case _CHECK_RECURSION_REMAINING: {
            break;
        }

        case _INIT_CALL_PY_EXACT_ARGS: {
            JitOptRef *args;
            JitOptRef self_or_null;
            JitOptRef new_frame;
            args = &stack_pointer[-oparg];
            self_or_null = stack_pointer[-1 - oparg];
            int argcount = oparg;
            assert((this_instr + 2)->opcode == _PUSH_FRAME);
            PyCodeObject *co = get_code_with_logging((this_instr + 2));
            if (co == NULL) {
                ctx->done = true;
                break;
            }
            assert(!PyJitRef_IsNull(self_or_null));
            assert(args != NULL);
            if (sym_is_not_null(self_or_null)) {
                args--;
                argcount++;
            }
            if (sym_is_null(self_or_null) || sym_is_not_null(self_or_null)) {
                new_frame = PyJitRef_Wrap((JitOptSymbol *)frame_new(ctx, co, 0, args, argcount));
            } else {
                new_frame = PyJitRef_Wrap((JitOptSymbol *)frame_new(ctx, co, 0, NULL, 0));
            }
            stack_pointer[-2 - oparg] = new_frame;
            stack_pointer += -1 - oparg;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

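        /* When self_or_null is statically known above, the argument window is
           widened to include self and the new abstract frame is seeded with
           the caller's argument symbols; otherwise frame_new() starts from an
           empty locals view and the callee's locals are treated as unknown. */
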
        case _PUSH_FRAME: {
            JitOptRef new_frame;
            new_frame = stack_pointer[-1];
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            if (!CURRENT_FRAME_IS_INIT_SHIM()) {
                ctx->frame->stack_pointer = stack_pointer;
            }
            ctx->frame = (_Py_UOpsAbstractFrame *)PyJitRef_Unwrap(new_frame);
            ctx->curr_frame_depth++;
            stack_pointer = ctx->frame->stack_pointer;
            uint64_t operand = this_instr->operand0;
            if (operand == 0) {
                ctx->done = true;
                break;
            }
            if (!(operand & 1)) {
                PyFunctionObject *func = (PyFunctionObject *)operand;
                ctx->frame->func = func;
            }
            if ((this_instr-1)->opcode == _SAVE_RETURN_OFFSET ||
                (this_instr-1)->opcode == _CREATE_INIT_FRAME) {
                assert((this_instr+1)->opcode == _GUARD_IP__PUSH_FRAME);
                REPLACE_OP(this_instr+1, _NOP, 0, 0);
            }
            break;
        }

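        /* _PUSH_FRAME's operand0 describes the callee: 0 means it is unknown
           (so abstract interpretation stops), and an even value is a tagged
           PyFunctionObject pointer that becomes the abstract frame's function.
           The following _GUARD_IP__PUSH_FRAME is NOP'd when the previous uop
           (_SAVE_RETURN_OFFSET or _CREATE_INIT_FRAME) already pins the return
           target, making the instruction-pointer guard redundant. */
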
        case _GUARD_NOS_NULL: {
            JitOptRef null;
            null = stack_pointer[-2];
            if (sym_is_null(null)) {
                REPLACE_OP(this_instr, _NOP, 0, 0);
            }
            sym_set_null(null);
            break;
        }

        case _GUARD_NOS_NOT_NULL: {
            JitOptRef nos;
            nos = stack_pointer[-2];
            if (sym_is_not_null(nos)) {
                REPLACE_OP(this_instr, _NOP, 0, 0);
            }
            sym_set_non_null(nos);
            break;
        }

        case _GUARD_THIRD_NULL: {
            JitOptRef null;
            null = stack_pointer[-3];
            if (sym_is_null(null)) {
                REPLACE_OP(this_instr, _NOP, 0, 0);
            }
            sym_set_null(null);
            break;
        }

        case _GUARD_CALLABLE_TYPE_1: {
            JitOptRef callable;
            callable = stack_pointer[-3];
            if (sym_get_const(ctx, callable) == (PyObject *)&PyType_Type) {
                REPLACE_OP(this_instr, _NOP, 0, 0);
            }
            sym_set_const(callable, (PyObject *)&PyType_Type);
            break;
        }

        case _CALL_TYPE_1: {
            JitOptRef arg;
            JitOptRef res;
            arg = stack_pointer[-1];
            PyObject* type = (PyObject *)sym_get_type(arg);
            if (type) {
                res = sym_new_const(ctx, type);
                REPLACE_OP(this_instr, _POP_CALL_ONE_LOAD_CONST_INLINE_BORROW, 0,
                           (uintptr_t)type);
            }
            else {
                res = sym_new_not_null(ctx);
            }
            stack_pointer[-3] = res;
            stack_pointer += -2;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _GUARD_CALLABLE_STR_1: {
            JitOptRef callable;
            callable = stack_pointer[-3];
            if (sym_get_const(ctx, callable) == (PyObject *)&PyUnicode_Type) {
                REPLACE_OP(this_instr, _NOP, 0, 0);
            }
            sym_set_const(callable, (PyObject *)&PyUnicode_Type);
            break;
        }

        case _CALL_STR_1: {
            JitOptRef arg;
            JitOptRef res;
            arg = stack_pointer[-1];
            if (sym_matches_type(arg, &PyUnicode_Type)) {
                res = PyJitRef_StripReferenceInfo(arg);
            }
            else {
                res = sym_new_type(ctx, &PyUnicode_Type);
            }
            stack_pointer[-3] = res;
            stack_pointer += -2;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _GUARD_CALLABLE_TUPLE_1: {
            JitOptRef callable;
            callable = stack_pointer[-3];
            if (sym_get_const(ctx, callable) == (PyObject *)&PyTuple_Type) {
                REPLACE_OP(this_instr, _NOP, 0, 0);
            }
            sym_set_const(callable, (PyObject *)&PyTuple_Type);
            break;
        }

        case _CALL_TUPLE_1: {
            JitOptRef arg;
            JitOptRef res;
            arg = stack_pointer[-1];
            if (sym_matches_type(arg, &PyTuple_Type)) {
                res = PyJitRef_StripReferenceInfo(arg);
            }
            else {
                res = sym_new_type(ctx, &PyTuple_Type);
            }
            stack_pointer[-3] = res;
            stack_pointer += -2;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _CHECK_AND_ALLOCATE_OBJECT: {
            JitOptRef *args;
            JitOptRef self_or_null;
            JitOptRef callable;
            args = &stack_pointer[-oparg];
            self_or_null = stack_pointer[-1 - oparg];
            callable = stack_pointer[-2 - oparg];
            uint32_t type_version = (uint32_t)this_instr->operand0;
            (void)type_version;
            (void)args;
            callable = sym_new_not_null(ctx);
            self_or_null = sym_new_not_null(ctx);
            stack_pointer[-2 - oparg] = callable;
            stack_pointer[-1 - oparg] = self_or_null;
            break;
        }

        case _CREATE_INIT_FRAME: {
            JitOptRef *args;
            JitOptRef self;
            JitOptRef init_frame;
            args = &stack_pointer[-oparg];
            self = stack_pointer[-1 - oparg];
            ctx->frame->stack_pointer = stack_pointer - oparg - 2;
            _Py_UOpsAbstractFrame *shim = frame_new(ctx, (PyCodeObject *)&_Py_InitCleanup, 0, NULL, 0);
            if (shim == NULL) {
                break;
            }
            shim->stack[0] = self;
            shim->stack_pointer++;
            assert((int)(shim->stack_pointer - shim->stack) == 1);
            ctx->frame = shim;
            ctx->curr_frame_depth++;
            assert((this_instr + 1)->opcode == _PUSH_FRAME);
            PyCodeObject *co = get_code_with_logging((this_instr + 1));
            // Mirror the NULL check done by the other frame-push cases
            // (_PY_FRAME_GENERAL, _PY_FRAME_KW): bail out if the code object
            // could not be recovered rather than building a frame from NULL.
            if (co == NULL) {
                ctx->done = true;
                break;
            }
            init_frame = PyJitRef_Wrap((JitOptSymbol *)frame_new(ctx, co, 0, args-1, oparg+1));
            stack_pointer[-2 - oparg] = init_frame;
            stack_pointer += -1 - oparg;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _EXIT_INIT_CHECK: {
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _CALL_BUILTIN_CLASS: {
            JitOptRef res;
            res = sym_new_not_null(ctx);
            stack_pointer[-2 - oparg] = res;
            stack_pointer += -1 - oparg;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _CALL_BUILTIN_O: {
            JitOptRef res;
            res = sym_new_not_null(ctx);
            stack_pointer[-2 - oparg] = res;
            stack_pointer += -1 - oparg;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _CALL_BUILTIN_FAST: {
            JitOptRef res;
            res = sym_new_not_null(ctx);
            stack_pointer[-2 - oparg] = res;
            stack_pointer += -1 - oparg;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _CALL_BUILTIN_FAST_WITH_KEYWORDS: {
            JitOptRef res;
            res = sym_new_not_null(ctx);
            stack_pointer[-2 - oparg] = res;
            stack_pointer += -1 - oparg;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _GUARD_CALLABLE_LEN: {
            JitOptRef callable;
            callable = stack_pointer[-3];
            PyObject *len = _PyInterpreterState_GET()->callable_cache.len;
            if (sym_get_const(ctx, callable) == len) {
                REPLACE_OP(this_instr, _NOP, 0, 0);
            }
            sym_set_const(callable, len);
            break;
        }

        case _CALL_LEN: {
            JitOptRef arg;
            JitOptRef res;
            arg = stack_pointer[-1];
            res = sym_new_type(ctx, &PyLong_Type);
            Py_ssize_t tuple_length = sym_tuple_length(arg);
            if (tuple_length >= 0) {
                PyObject *temp = PyLong_FromSsize_t(tuple_length);
                if (temp == NULL) {
                    goto error;
                }
                if (_Py_IsImmortal(temp)) {
                    REPLACE_OP(this_instr, _POP_CALL_ONE_LOAD_CONST_INLINE_BORROW,
                               0, (uintptr_t)temp);
                }
                res = sym_new_const(ctx, temp);
                stack_pointer[-3] = res;
                stack_pointer += -2;
                assert(WITHIN_STACK_BOUNDS());
                Py_DECREF(temp);
                stack_pointer += 2;
            }
            stack_pointer[-3] = res;
            stack_pointer += -2;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _GUARD_CALLABLE_ISINSTANCE: {
            JitOptRef callable;
            callable = stack_pointer[-4];
            PyObject *isinstance = _PyInterpreterState_GET()->callable_cache.isinstance;
            if (sym_get_const(ctx, callable) == isinstance) {
                REPLACE_OP(this_instr, _NOP, 0, 0);
            }
            sym_set_const(callable, isinstance);
            break;
        }

        case _CALL_ISINSTANCE: {
            JitOptRef cls;
            JitOptRef instance;
            JitOptRef res;
            cls = stack_pointer[-1];
            instance = stack_pointer[-2];
            res = sym_new_type(ctx, &PyBool_Type);
            PyTypeObject *inst_type = sym_get_type(instance);
            PyTypeObject *cls_o = (PyTypeObject *)sym_get_const(ctx, cls);
            if (inst_type && cls_o && sym_matches_type(cls, &PyType_Type)) {
                PyObject *out = Py_False;
                if (inst_type == cls_o || PyType_IsSubtype(inst_type, cls_o)) {
                    out = Py_True;
                }
                sym_set_const(res, out);
                REPLACE_OP(this_instr, _POP_CALL_TWO_LOAD_CONST_INLINE_BORROW, 0, (uintptr_t)out);
            }
            stack_pointer[-4] = res;
            stack_pointer += -3;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

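        /* _CALL_ISINSTANCE folds to a constant when the instance's type and
           the class are both known and the class is exactly a type (ruling
           out metaclass __instancecheck__ overrides): the call is rewritten
           to pop the callable and its two arguments and push a borrowed
           Py_True or Py_False. */
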
        case _GUARD_CALLABLE_LIST_APPEND: {
            JitOptRef callable;
            callable = stack_pointer[-3];
            PyObject *list_append = _PyInterpreterState_GET()->callable_cache.list_append;
            if (sym_get_const(ctx, callable) == list_append) {
                REPLACE_OP(this_instr, _NOP, 0, 0);
            }
            sym_set_const(callable, list_append);
            break;
        }

        case _CALL_LIST_APPEND: {
            stack_pointer += -3;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _CALL_METHOD_DESCRIPTOR_O: {
            JitOptRef res;
            res = sym_new_not_null(ctx);
            stack_pointer[-2 - oparg] = res;
            stack_pointer += -1 - oparg;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _CALL_METHOD_DESCRIPTOR_FAST_WITH_KEYWORDS: {
            JitOptRef res;
            res = sym_new_not_null(ctx);
            stack_pointer[-2 - oparg] = res;
            stack_pointer += -1 - oparg;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _CALL_METHOD_DESCRIPTOR_NOARGS: {
            JitOptRef res;
            res = sym_new_not_null(ctx);
            stack_pointer[-2 - oparg] = res;
            stack_pointer += -1 - oparg;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _CALL_METHOD_DESCRIPTOR_FAST: {
            JitOptRef res;
            res = sym_new_not_null(ctx);
            stack_pointer[-2 - oparg] = res;
            stack_pointer += -1 - oparg;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        /* _MONITOR_CALL_KW is not a viable micro-op for tier 2 */

        case _MAYBE_EXPAND_METHOD_KW: {
            break;
        }

        /* _DO_CALL_KW is not a viable micro-op for tier 2 */

        case _PY_FRAME_KW: {
            JitOptRef new_frame;
            assert((this_instr + 2)->opcode == _PUSH_FRAME);
            PyCodeObject *co = get_code_with_logging((this_instr + 2));
            if (co == NULL) {
                ctx->done = true;
                break;
            }
            new_frame = PyJitRef_Wrap((JitOptSymbol *)frame_new(ctx, co, 0, NULL, 0));
            stack_pointer[-3 - oparg] = new_frame;
            stack_pointer += -2 - oparg;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _CHECK_FUNCTION_VERSION_KW: {
            break;
        }

        case _CHECK_METHOD_VERSION_KW: {
            break;
        }

        case _EXPAND_METHOD_KW: {
            break;
        }

        case _CHECK_IS_NOT_PY_CALLABLE_KW: {
            break;
        }

        case _CALL_KW_NON_PY: {
            JitOptRef res;
            res = sym_new_not_null(ctx);
            stack_pointer[-3 - oparg] = res;
            stack_pointer += -2 - oparg;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _MAKE_CALLARGS_A_TUPLE: {
            break;
        }

        /* _DO_CALL_FUNCTION_EX is not a viable micro-op for tier 2 */

        case _MAKE_FUNCTION: {
            JitOptRef func;
            func = sym_new_not_null(ctx);
            stack_pointer[-1] = func;
            break;
        }

        case _SET_FUNCTION_ATTRIBUTE: {
            JitOptRef func_out;
            func_out = sym_new_not_null(ctx);
            stack_pointer[-2] = func_out;
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _RETURN_GENERATOR: {
            JitOptRef res;
            ctx->frame->stack_pointer = stack_pointer;
            PyCodeObject *returning_code = get_code_with_logging(this_instr);
            if (returning_code == NULL) {
                ctx->done = true;
                break;
            }
            _Py_BloomFilter_Add(dependencies, returning_code);
            int returning_stacklevel = this_instr->operand1;
            if (frame_pop(ctx, returning_code, returning_stacklevel)) {
                break;
            }
            stack_pointer = ctx->frame->stack_pointer;
            res = sym_new_unknown(ctx);
            stack_pointer[0] = res;
            stack_pointer += 1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

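        /* _RETURN_GENERATOR records the returning code object in the trace's
           bloom-filter dependencies, so the compiled executor can be
           invalidated if that code object is later modified or freed. */
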
        case _BUILD_SLICE: {
            JitOptRef slice;
            slice = sym_new_type(ctx, &PySlice_Type);
            stack_pointer[-oparg] = slice;
            stack_pointer += 1 - oparg;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _CONVERT_VALUE: {
            JitOptRef result;
            result = sym_new_not_null(ctx);
            stack_pointer[-1] = result;
            break;
        }

        case _FORMAT_SIMPLE: {
            JitOptRef res;
            res = sym_new_not_null(ctx);
            stack_pointer[-1] = res;
            break;
        }

        case _FORMAT_WITH_SPEC: {
            JitOptRef res;
            res = sym_new_not_null(ctx);
            stack_pointer[-2] = res;
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _COPY: {
            JitOptRef bottom;
            JitOptRef top;
            bottom = stack_pointer[-1 - (oparg-1)];
            assert(oparg > 0);
            top = bottom;
            stack_pointer[0] = top;
            stack_pointer += 1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _BINARY_OP: {
            JitOptRef rhs;
            JitOptRef lhs;
            JitOptRef res;
            rhs = stack_pointer[-1];
            lhs = stack_pointer[-2];
            if (
                sym_is_safe_const(ctx, lhs) &&
                sym_is_safe_const(ctx, rhs)
            ) {
                JitOptRef lhs_sym = lhs;
                JitOptRef rhs_sym = rhs;
                _PyStackRef lhs = sym_get_const_as_stackref(ctx, lhs_sym);
                _PyStackRef rhs = sym_get_const_as_stackref(ctx, rhs_sym);
                _PyStackRef res_stackref;
                /* Start of uop copied from bytecodes for constant evaluation */
                PyObject *lhs_o = PyStackRef_AsPyObjectBorrow(lhs);
                PyObject *rhs_o = PyStackRef_AsPyObjectBorrow(rhs);
                assert(_PyEval_BinaryOps[oparg]);
                PyObject *res_o = _PyEval_BinaryOps[oparg](lhs_o, rhs_o);
                if (res_o == NULL) {
                    JUMP_TO_LABEL(error);
                }
                res_stackref = PyStackRef_FromPyObjectSteal(res_o);
                /* End of uop copied from bytecodes for constant evaluation */
                res = sym_new_const_steal(ctx, PyStackRef_AsPyObjectSteal(res_stackref));
                if (sym_is_const(ctx, res)) {
                    PyObject *result = sym_get_const(ctx, res);
                    if (_Py_IsImmortal(result)) {
                        // Replace with _POP_TWO_LOAD_CONST_INLINE_BORROW since we have two inputs and an immortal result
                        REPLACE_OP(this_instr, _POP_TWO_LOAD_CONST_INLINE_BORROW, 0, (uintptr_t)result);
                    }
                }
                stack_pointer[-2] = res;
                stack_pointer += -1;
                assert(WITHIN_STACK_BOUNDS());
                break;
            }
            bool lhs_int = sym_matches_type(lhs, &PyLong_Type);
            bool rhs_int = sym_matches_type(rhs, &PyLong_Type);
            bool lhs_float = sym_matches_type(lhs, &PyFloat_Type);
            bool rhs_float = sym_matches_type(rhs, &PyFloat_Type);
            if (!((lhs_int || lhs_float) && (rhs_int || rhs_float))) {
                res = sym_new_unknown(ctx);
            }
            else if (oparg == NB_POWER || oparg == NB_INPLACE_POWER) {
                if (rhs_float) {
                    res = sym_new_unknown(ctx);
                }
                else if (lhs_float) {
                    res = sym_new_type(ctx, &PyFloat_Type);
                }
                else if (!sym_is_const(ctx, rhs)) {
                    res = sym_new_unknown(ctx);
                }
                else if (_PyLong_IsNegative((PyLongObject *)sym_get_const(ctx, rhs))) {
                    res = sym_new_type(ctx, &PyFloat_Type);
                }
                else {
                    res = sym_new_type(ctx, &PyLong_Type);
                }
            }
            else if (oparg == NB_TRUE_DIVIDE || oparg == NB_INPLACE_TRUE_DIVIDE) {
                res = sym_new_type(ctx, &PyFloat_Type);
            }
            else if (lhs_int && rhs_int) {
                res = sym_new_type(ctx, &PyLong_Type);
            }
            else {
                res = sym_new_type(ctx, &PyFloat_Type);
            }
            stack_pointer[-2] = res;
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

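        /* The fallthrough path of _BINARY_OP infers the result type from the
           operand types: int op int stays int, any float operand makes a
           float, true division always yields a float, and power is
           special-cased because int ** negative-int produces a float (so a
           non-constant int exponent leaves the result unknown). */
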
        case _SWAP: {
            JitOptRef top;
            JitOptRef bottom;
            top = stack_pointer[-1];
            bottom = stack_pointer[-2 - (oparg-2)];
            JitOptRef temp = bottom;
            bottom = top;
            top = temp;
            assert(oparg >= 2);
            stack_pointer[-2 - (oparg-2)] = bottom;
            stack_pointer[-1] = top;
            break;
        }

        /* _INSTRUMENTED_LINE is not a viable micro-op for tier 2 */

        /* _INSTRUMENTED_INSTRUCTION is not a viable micro-op for tier 2 */

        /* _INSTRUMENTED_JUMP_FORWARD is not a viable micro-op for tier 2 */

        /* _MONITOR_JUMP_BACKWARD is not a viable micro-op for tier 2 */

        /* _INSTRUMENTED_NOT_TAKEN is not a viable micro-op for tier 2 */

        /* _INSTRUMENTED_POP_JUMP_IF_TRUE is not a viable micro-op for tier 2 */

        /* _INSTRUMENTED_POP_JUMP_IF_FALSE is not a viable micro-op for tier 2 */

        /* _INSTRUMENTED_POP_JUMP_IF_NONE is not a viable micro-op for tier 2 */

        /* _INSTRUMENTED_POP_JUMP_IF_NOT_NONE is not a viable micro-op for tier 2 */

        case _GUARD_IS_TRUE_POP: {
            JitOptRef flag;
            flag = stack_pointer[-1];
            if (sym_is_const(ctx, flag)) {
                PyObject *value = sym_get_const(ctx, flag);
                assert(value != NULL);
                eliminate_pop_guard(this_instr, value != Py_True);
            }
            sym_set_const(flag, Py_True);
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _GUARD_IS_FALSE_POP: {
            JitOptRef flag;
            flag = stack_pointer[-1];
            if (sym_is_const(ctx, flag)) {
                PyObject *value = sym_get_const(ctx, flag);
                assert(value != NULL);
                eliminate_pop_guard(this_instr, value != Py_False);
            }
            sym_set_const(flag, Py_False);
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _GUARD_IS_NONE_POP: {
            JitOptRef val;
            val = stack_pointer[-1];
            if (sym_is_const(ctx, val)) {
                PyObject *value = sym_get_const(ctx, val);
                assert(value != NULL);
                eliminate_pop_guard(this_instr, !Py_IsNone(value));
            }
            else if (sym_has_type(val)) {
                assert(!sym_matches_type(val, &_PyNone_Type));
                eliminate_pop_guard(this_instr, true);
            }
            sym_set_const(val, Py_None);
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _GUARD_IS_NOT_NONE_POP: {
            JitOptRef val;
            val = stack_pointer[-1];
            if (sym_is_const(ctx, val)) {
                PyObject *value = sym_get_const(ctx, val);
                assert(value != NULL);
                eliminate_pop_guard(this_instr, Py_IsNone(value));
            }
            else if (sym_has_type(val)) {
                assert(!sym_matches_type(val, &_PyNone_Type));
                eliminate_pop_guard(this_instr, false);
            }
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

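        /* For the _GUARD_IS_*_POP family, a statically decided flag means the
           branch is resolved now: eliminate_pop_guard() replaces the guard
           with a plain pop when the known value satisfies it, and presumably
           ends the trace early when it cannot. The sym_set_const() calls
           record the value the guard proves on paths where it remains. */
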
        case _JUMP_TO_TOP: {
            ctx->done = true;
            break;
        }

        case _SET_IP: {
            break;
        }

        case _CHECK_STACK_SPACE_OPERAND: {
            uint32_t framesize = (uint32_t)this_instr->operand0;
            (void)framesize;
            Py_UNREACHABLE();
            break;
        }

        case _SAVE_RETURN_OFFSET: {
            break;
        }

        case _EXIT_TRACE: {
            PyObject *exit_p = (PyObject *)this_instr->operand0;
            (void)exit_p;
            ctx->done = true;
            break;
        }

        case _DYNAMIC_EXIT: {
            break;
        }

        case _CHECK_VALIDITY: {
            break;
        }

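        /* The _POP_*_LOAD_CONST_INLINE[_BORROW] uops handled below are the
           targets of the REPLACE_OP rewrites above: each pops some number of
           now-dead inputs and pushes the folded constant. The _BORROW
           variants push a borrowed reference, which the rewrites in this file
           appear to reserve for immortal results. */
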
        case _LOAD_CONST_INLINE: {
            JitOptRef value;
            PyObject *ptr = (PyObject *)this_instr->operand0;
            value = sym_new_const(ctx, ptr);
            stack_pointer[0] = value;
            stack_pointer += 1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _POP_TOP_LOAD_CONST_INLINE: {
            JitOptRef value;
            PyObject *ptr = (PyObject *)this_instr->operand0;
            value = sym_new_const(ctx, ptr);
            stack_pointer[-1] = value;
            break;
        }

        case _LOAD_CONST_INLINE_BORROW: {
            JitOptRef value;
            PyObject *ptr = (PyObject *)this_instr->operand0;
            value = PyJitRef_Borrow(sym_new_const(ctx, ptr));
            stack_pointer[0] = value;
            stack_pointer += 1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _POP_CALL: {
            stack_pointer += -2;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _POP_CALL_ONE: {
            stack_pointer += -3;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _POP_CALL_TWO: {
            stack_pointer += -4;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _POP_TOP_LOAD_CONST_INLINE_BORROW: {
            JitOptRef value;
            PyObject *ptr = (PyObject *)this_instr->operand0;
            value = PyJitRef_Borrow(sym_new_const(ctx, ptr));
            stack_pointer[-1] = value;
            break;
        }

        case _POP_TWO_LOAD_CONST_INLINE_BORROW: {
            JitOptRef value;
            value = sym_new_not_null(ctx);
            stack_pointer[-2] = value;
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _POP_CALL_LOAD_CONST_INLINE_BORROW: {
            JitOptRef value;
            PyObject *ptr = (PyObject *)this_instr->operand0;
            value = PyJitRef_Borrow(sym_new_const(ctx, ptr));
            stack_pointer[-2] = value;
            stack_pointer += -1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _POP_CALL_ONE_LOAD_CONST_INLINE_BORROW: {
            JitOptRef value;
            PyObject *ptr = (PyObject *)this_instr->operand0;
            value = PyJitRef_Borrow(sym_new_const(ctx, ptr));
            stack_pointer[-3] = value;
            stack_pointer += -2;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _POP_CALL_TWO_LOAD_CONST_INLINE_BORROW: {
            JitOptRef value;
            PyObject *ptr = (PyObject *)this_instr->operand0;
            value = PyJitRef_Borrow(sym_new_const(ctx, ptr));
            stack_pointer[-4] = value;
            stack_pointer += -3;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _LOAD_CONST_UNDER_INLINE: {
            JitOptRef value;
            JitOptRef new;
            value = sym_new_not_null(ctx);
            new = sym_new_not_null(ctx);
            stack_pointer[-1] = value;
            stack_pointer[0] = new;
            stack_pointer += 1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _LOAD_CONST_UNDER_INLINE_BORROW: {
            JitOptRef value;
            JitOptRef new;
            value = sym_new_not_null(ctx);
            new = sym_new_not_null(ctx);
            stack_pointer[-1] = value;
            stack_pointer[0] = new;
            stack_pointer += 1;
            assert(WITHIN_STACK_BOUNDS());
            break;
        }

        case _START_EXECUTOR: {
            break;
        }

        case _MAKE_WARM: {
            break;
        }

        case _FATAL_ERROR: {
            break;
        }

        case _DEOPT: {
            ctx->done = true;
            break;
        }

        case _HANDLE_PENDING_AND_DEOPT: {
            break;
        }

        case _ERROR_POP_N: {
            break;
        }

        case _TIER2_RESUME_CHECK: {
            break;
        }

        case _COLD_EXIT: {
            break;
        }

        case _COLD_DYNAMIC_EXIT: {
            break;
        }

        case _GUARD_IP__PUSH_FRAME: {
            break;
        }

        case _GUARD_IP_YIELD_VALUE: {
            break;
        }

        case _GUARD_IP_RETURN_VALUE: {
            break;
        }

        case _GUARD_IP_RETURN_GENERATOR: {
            break;
        }