Segmentation fault merging keys of maps containing maps
Cendrb opened this issue
I've noticed odd behavior when working with Maps nested as values of another Map. I merge the keys of the two maps and then try to access one of the nested maps. The error is usually PHP Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 99048189440 bytes) in ..., though I've also seen the process end with a segmentation fault.
The behavior is odd. It seems to depend on the destruction of the resulting merged set of keys:
use Ds\Map;
use Ds\Set;
// Two maps with at least one matching key. Values must be maps
$mapInstance1 = new Map();
$mapInstance2 = new Map();
$mapInstance1->put('test1', new Map());
$mapInstance1->put('test2', new Map());
$mapInstance2->put('test1', new Map());
// this runs fine
$mergedSetOfKeys = $mapInstance1->keys()->merge($mapInstance2->keys());
echo "call 1\n";
$mapInstance1['test1']->keys(); // here I can access the nested map without an issue
echo "call 2\n";
// unassigning the resulting set causes the inner map to no longer work
$mergedSetOfKeys = null;
$mapInstance1['test1']->keys(); // error occurs here, call 3 is never printed
echo "call 3\n";
You can also achieve the same result by just using the single map $mapInstance1
for all the calls instead.
I'm running PHP 8.2.13 on Manjaro 23.1.0 x64. I've also tried the same in the docker image php:8.1.8-apache. The Ds 1.4.0 extension was installed in both cases.
I've reduced this to:
$map = new Map();
$map->put('issue #200', new Map());
$map->keys()->merge($map->keys());
Valgrind reports an invalid read:
Invalid read of size 4
==206== at 0x56602C: zval_ptr_dtor (in /usr/local/bin/php)
==206== by 0x5B5B61F: ds_htable_clear_buffer.part.0 (ds_htable.c:484)
==206== by 0x5B5C16B: ds_htable_clear_buffer (ds_htable.c:479)
==206== by 0x5B5C16B: ds_htable_free (ds_htable.c:508)
==206== by 0x5B5DF57: ds_set_free (ds_set.c:263)
==206== by 0x5B6352B: php_ds_set_free_object (php_set_handlers.c:60)
==206== by 0x6115EF: zend_objects_store_del (in /usr/local/bin/php)
The trace points at the bucket-delete loop in ds_htable_clear_buffer:
DS_HTABLE_FOREACH_BUCKET(table, bucket) {
DS_HTABLE_BUCKET_DELETE(bucket);
}
DS_HTABLE_FOREACH_END();
where DS_HTABLE_BUCKET_DELETE is defined as:
#define DS_HTABLE_BUCKET_DELETE(b) \
DTOR_AND_UNDEF(&(b)->value); \
DTOR_AND_UNDEF(&(b)->key); \
DS_HTABLE_BUCKET_NEXT((b)) = DS_HTABLE_INVALID_INDEX
Removing DTOR_AND_UNDEF(&(b)->value)
hides the issue, so it's related to the destruction of the values in the set's hash table, which is created by cloning the map's hash table when ->keys()
is called. As far as I can tell, the code looks correct: the table clone uses ZVAL_COPY,
which to my understanding increments the refcount correctly, so I'm not sure why this bug is happening. I'll need to take a closer look.
Thank you for reporting this.
I think the problem is here:
void ds_htable_put(ds_htable_t *table, zval *key, zval *value)
{
ds_htable_bucket_t *bucket;
// Attempt to find the bucket or initialize it as a new bucket.
bool found = ds_htable_lookup_or_next(table, key, &bucket);
// If found, destruct the current value so that we can replace it.
if (found) {
zval_ptr_dtor(&bucket->value);
}
if (value) {
ZVAL_COPY(&bucket->value, value);
}
}
If found is true but value is NULL, then a stale zval is left in bucket->value,
and every subsequent operation destroys an already-destroyed zval.
I think adding ZVAL_UNDEF(&bucket->value);
after the destructor call fixes it:
// If found, destruct the current value so that we can replace it.
if (found) {
zval_ptr_dtor(&bucket->value);
ZVAL_UNDEF(&bucket->value);
}
At least this fixes the issue for me.
Filed a PR: #202
Maybe this is already done, but should there be regression tests added for the two issues @nielsdos merged PRs for?
I've added them here and tagged the tests as v1.5.0: php-ds/tests@c16f1ad
Thanks for your work on this!