author     Olivier Certner <olce.palemoon@certner.fr>   2021-01-06 11:25:27 +0100
committer  Olivier Certner <olce.palemoon@certner.fr>   2021-01-07 17:34:03 +0100
commit     f76695c1ce032b634f3e0e2a593aebdd1d49703b
tree       62653f3a74e8c046d154c30e04ae978335ba9b60
parent     da217348d9e7fe1e22df725c3b48a149e7dd9f54
Issue #1699 - Part 3a: mozjemalloc: Memory barriers on 'malloc_initialized'
The barriers are there to make sure that setting 'malloc_initialized' at the end
of init is seen by any thread that later runs on a different core. They are in
theory necessary in the absence of an explicit pthread lock.

Without them, the thread doing the initialization could later spawn other
threads that do not observe the updated 'malloc_initialized' value, leading to a
second initialization. Whether this can happen depends on whether the OS issues
a full memory barrier before a new thread starts running, which I don't know and
don't want to rely on.
This was done for FreeBSD only, for the sake of robustness. In theory, this
would be needed on Windows too.
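For illustration, the pairing relied on here is an acquire load matching a
release store: any thread that observes 'malloc_initialized' as true through an
acquire load is also guaranteed to see every write the initializing thread made
before its release store. Below is a minimal, self-contained sketch of that
pattern using the same GCC/Clang __atomic builtins the patch uses; the names
(guarded_init, do_init, shared_state) are invented for this example and it is
not the mozjemalloc code.

/*
 * Sketch of acquire/release publication of an "initialized" flag.
 * Build with: cc -pthread sketch.c
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static bool initialized = false;      /* plays the role of malloc_initialized */
static pthread_mutex_t init_lock = PTHREAD_MUTEX_INITIALIZER;
static int shared_state;              /* stands in for the allocator's state */

static void
do_init(void)
{
	shared_state = 42;            /* expensive one-time setup */
}

/* Returns false once initialization is done (mirrors malloc_init_hard()). */
static bool
guarded_init(void)
{
	/*
	 * Fast path: if the acquire load sees 'initialized == true', all
	 * writes made by the initializing thread before its release store
	 * (here, to shared_state) are visible to us as well.
	 */
	if (__atomic_load_n(&initialized, __ATOMIC_ACQUIRE))
		return (false);

	pthread_mutex_lock(&init_lock);
	if (!__atomic_load_n(&initialized, __ATOMIC_ACQUIRE)) {
		do_init();
		/* Release store: publish shared_state before the flag. */
		__atomic_store_n(&initialized, true, __ATOMIC_RELEASE);
	}
	pthread_mutex_unlock(&init_lock);
	return (false);
}

static void *
worker(void *arg)
{
	(void)arg;
	guarded_init();
	printf("shared_state = %d\n", shared_state);
	return NULL;
}

int
main(void)
{
	pthread_t t[4];
	for (int i = 0; i < 4; i++)
		pthread_create(&t[i], NULL, worker, NULL);
	for (int i = 0; i < 4; i++)
		pthread_join(t[i], NULL);
	return 0;
}

The mutex in the sketch only serializes the slow path; the correctness of the
lock-free fast path rests entirely on the acquire/release pair, which is the
visibility guarantee the commit message describes.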
-rw-r--r--   memory/mozjemalloc/jemalloc.c | 14
1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/memory/mozjemalloc/jemalloc.c b/memory/mozjemalloc/jemalloc.c
index 0eb5241c78..e427ea60de 100644
--- a/memory/mozjemalloc/jemalloc.c
+++ b/memory/mozjemalloc/jemalloc.c
@@ -4735,7 +4735,7 @@ huge_dalloc(void *ptr)
 	base_node_dealloc(node);
 }
 
-/* 
+/*
  * Platform-specific methods to determine the number of CPUs in a system.
  * This will be used to determine the desired number of arenas.
  */
@@ -4841,7 +4841,7 @@ static inline unsigned
 malloc_ncpus(void)
 {
 	SYSTEM_INFO info;
-	
+
 	GetSystemInfo(&info);
 	return (info.dwNumberOfProcessors);
 }
@@ -5015,6 +5015,7 @@ malloc_init_hard(void)
 	malloc_mutex_lock(&init_lock);
 #endif
 
+#ifndef __FreeBSD__
 	if (malloc_initialized) {
 		/*
 		 * Another thread initialized the allocator before this one
@@ -5025,6 +5026,11 @@ malloc_init_hard(void)
 #endif
 		return (false);
 	}
+#else
+	if (__atomic_load_n(&malloc_initialized, __ATOMIC_ACQUIRE)) {
+		return (false);
+	}
+#endif
 
 #ifdef MOZ_MEMORY_WINDOWS
 	/* get a thread local storage index */
@@ -5450,7 +5456,11 @@ MALLOC_OUT:
 	if (chunk_rtree == NULL)
 		return (true);
 
+#ifndef __FreeBSD__
 	malloc_initialized = true;
+#else
+	__atomic_store_n(&malloc_initialized, true, __ATOMIC_RELEASE);
+#endif
 
 #if !defined(MOZ_MEMORY_WINDOWS) && !defined(MOZ_MEMORY_DARWIN)
 	/* Prevent potential deadlock on malloc locks after fork. */