1 /*
2 This is a version (aka dlmalloc) of malloc/free/realloc written by
3 Doug Lea and released to the public domain, as explained at
4 http://creativecommons.org/publicdomain/zero/1.0/ Send questions,
5 comments, complaints, performance data, etc to dl@cs.oswego.edu
6
7 * Version 2.8.6 Wed Aug 29 06:57:58 2012 Doug Lea
8 Note: There may be an updated version of this malloc obtainable at
9 ftp://gee.cs.oswego.edu/pub/misc/malloc.c
10 Check before installing!
11
12 * Quickstart
13
14 This library is all in one file to simplify the most common usage:
15 ftp it, compile it (-O3), and link it into another program. All of
16 the compile-time options default to reasonable values for use on
17 most platforms. You might later want to step through various
18 compile-time and dynamic tuning options.
19
20 For convenience, an include file for code using this malloc is at:
21 ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.6.h
22 You don't really need this .h file unless you call functions not
23 defined in your system include files. The .h file contains only the
24 excerpts from this file needed for using this malloc on ANSI C/C++
25 systems, so long as you haven't changed compile-time options about
26 naming and tuning parameters. If you do, then you can create your
27 own malloc.h that does include all settings by cutting at the point
28 indicated below. Note that you may already by default be using a C
29 library containing a malloc that is based on some version of this
30 malloc (for example in linux). You might still want to use the one
31 in this file to customize settings or to avoid overheads associated
32 with library versions.
33
34 * Vital statistics:
35
36 Supported pointer/size_t representation: 4 or 8 bytes
37 size_t MUST be an unsigned type of the same width as
38 pointers. (If you are using an ancient system that declares
39 size_t as a signed type, or need it to be a different width
40 than pointers, you can use a previous release of this malloc
41 (e.g. 2.7.2) supporting these.)
42
43 Alignment: 8 bytes (minimum)
44 This suffices for nearly all current machines and C compilers.
45 However, you can define MALLOC_ALIGNMENT to be wider than this
46 if necessary (up to 128bytes), at the expense of using more space.
47
48 Minimum overhead per allocated chunk: 4 or 8 bytes (if 4byte sizes)
49 8 or 16 bytes (if 8byte sizes)
50 Each malloced chunk has a hidden word of overhead holding size
51 and status information, and additional cross-check word
52 if FOOTERS is defined.
53
54 Minimum allocated size: 4-byte ptrs: 16 bytes (including overhead)
55 8-byte ptrs: 32 bytes (including overhead)
56
57 Even a request for zero bytes (i.e., malloc(0)) returns a
58 pointer to something of the minimum allocatable size.
59 The maximum overhead wastage (i.e., number of extra bytes
60 allocated than were requested in malloc) is less than or equal
61 to the minimum size, except for requests >= mmap_threshold that
62 are serviced via mmap(), where the worst case wastage is about
63 32 bytes plus the remainder from a system page (the minimal
64 mmap unit); typically 4096 or 8192 bytes.
65
66 Security: static-safe; optionally more or less
67 The "security" of malloc refers to the ability of malicious
68 code to accentuate the effects of errors (for example, freeing
69 space that is not currently malloc'ed or overwriting past the
70 ends of chunks) in code that calls malloc. This malloc
71 guarantees not to modify any memory locations below the base of
72 heap, i.e., static variables, even in the presence of usage
73 errors. The routines additionally detect most improper frees
74 and reallocs. All this holds as long as the static bookkeeping
75 for malloc itself is not corrupted by some other means. This
76 is only one aspect of security -- these checks do not, and
77 cannot, detect all possible programming errors.
78
79 If FOOTERS is defined nonzero, then each allocated chunk
80 carries an additional check word to verify that it was malloced
81 from its space. These check words are the same within each
82 execution of a program using malloc, but differ across
83 executions, so externally crafted fake chunks cannot be
84 freed. This improves security by rejecting frees/reallocs that
85 could corrupt heap memory, in addition to the checks preventing
86 writes to statics that are always on. This may further improve
87 security at the expense of time and space overhead. (Note that
88 FOOTERS may also be worth using with MSPACES.)
89
90 By default detected errors cause the program to abort (calling
91 "abort()"). You can override this to instead proceed past
92 errors by defining PROCEED_ON_ERROR. In this case, a bad free
93 has no effect, and a malloc that encounters a bad address
94 caused by user overwrites will ignore the bad address by
95 dropping pointers and indices to all known memory. This may
96 be appropriate for programs that should continue if at all
97 possible in the face of programming errors, although they may
98 run out of memory because dropped memory is never reclaimed.
99
100 If you don't like either of these options, you can define
101 CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
102 else. And if if you are sure that your program using malloc has
103 no errors or vulnerabilities, you can define INSECURE to 1,
104 which might (or might not) provide a small performance improvement.
105
106 It is also possible to limit the maximum total allocatable
107 space, using malloc_set_footprint_limit. This is not
108 designed as a security feature in itself (calls to set limits
109 are not screened or privileged), but may be useful as one
110 aspect of a secure implementation.
111
112 Thread-safety: NOT thread-safe unless USE_LOCKS defined non-zero
113 When USE_LOCKS is defined, each public call to malloc, free,
114 etc is surrounded with a lock. By default, this uses a plain
115 pthread mutex, win32 critical section, or a spin-lock if if
116 available for the platform and not disabled by setting
117 USE_SPIN_LOCKS=0. However, if USE_RECURSIVE_LOCKS is defined,
118 recursive versions are used instead (which are not required for
119 base functionality but may be needed in layered extensions).
120 Using a global lock is not especially fast, and can be a major
121 bottleneck. It is designed only to provide minimal protection
122 in concurrent environments, and to provide a basis for
123 extensions. If you are using malloc in a concurrent program,
124 consider instead using nedmalloc
125 (http://www.nedprod.com/programs/portable/nedmalloc/) or
126 ptmalloc (See http://www.malloc.de), which are derived from
127 versions of this malloc.
128
129 System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
130 This malloc can use unix sbrk or any emulation (invoked using
131 the CALL_MORECORE macro) and/or mmap/munmap or any emulation
132 (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
133 memory. On most unix systems, it tends to work best if both
134 MORECORE and MMAP are enabled. On Win32, it uses emulations
135 based on VirtualAlloc. It also uses common C library functions
136 like memset.
137
138 Compliance: I believe it is compliant with the Single Unix Specification
139 (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably
140 others as well.
141
142 * Overview of algorithms
143
144 This is not the fastest, most space-conserving, most portable, or
145 most tunable malloc ever written. However it is among the fastest
146 while also being among the most space-conserving, portable and
147 tunable. Consistent balance across these factors results in a good
148 general-purpose allocator for malloc-intensive programs.
149
150 In most ways, this malloc is a best-fit allocator. Generally, it
151 chooses the best-fitting existing chunk for a request, with ties
152 broken in approximately least-recently-used order. (This strategy
153 normally maintains low fragmentation.) However, for requests less
154 than 256bytes, it deviates from best-fit when there is not an
155 exactly fitting available chunk by preferring to use space adjacent
156 to that used for the previous small request, as well as by breaking
157 ties in approximately most-recently-used order. (These enhance
158 locality of series of small allocations.) And for very large requests
159 (>= 256Kb by default), it relies on system memory mapping
160 facilities, if supported. (This helps avoid carrying around and
161 possibly fragmenting memory used only for large chunks.)
162
163 All operations (except malloc_stats and mallinfo) have execution
164 times that are bounded by a constant factor of the number of bits in
165 a size_t, not counting any clearing in calloc or copying in realloc,
166 or actions surrounding MORECORE and MMAP that have times
167 proportional to the number of non-contiguous regions returned by
168 system allocation routines, which is often just 1. In real-time
169 applications, you can optionally suppress segment traversals using
170 NO_SEGMENT_TRAVERSAL, which assures bounded execution even when
171 system allocators return non-contiguous spaces, at the typical
172 expense of carrying around more memory and increased fragmentation.
173
174 The implementation is not very modular and seriously overuses
175 macros. Perhaps someday all C compilers will do as good a job
176 inlining modular code as can now be done by brute-force expansion,
177 but now, enough of them seem not to.
178
179 Some compilers issue a lot of warnings about code that is
180 dead/unreachable only on some platforms, and also about intentional
181 uses of negation on unsigned types. All known cases of each can be
182 ignored.
183
184 For a longer but out of date high-level description, see
185 http://gee.cs.oswego.edu/dl/html/malloc.html
186
187 * MSPACES
188 If MSPACES is defined, then in addition to malloc, free, etc.,
189 this file also defines mspace_malloc, mspace_free, etc. These
190 are versions of malloc routines that take an "mspace" argument
191 obtained using create_mspace, to control all internal bookkeeping.
192 If ONLY_MSPACES is defined, only these versions are compiled.
193 So if you would like to use this allocator for only some allocations,
194 and your system malloc for others, you can compile with
195 ONLY_MSPACES and then do something like...
196 static mspace mymspace = create_mspace(0,0); // for example
197 #define mymalloc(bytes) mspace_malloc(mymspace, bytes)
198
199 (Note: If you only need one instance of an mspace, you can instead
200 use "USE_DL_PREFIX" to relabel the global malloc.)
201
202 You can similarly create thread-local allocators by storing
203 mspaces as thread-locals. For example:
204 static __thread mspace tlms = 0;
205 void* tlmalloc(size_t bytes) {
206 if (tlms == 0) tlms = create_mspace(0, 0);
207 return mspace_malloc(tlms, bytes);
208 }
209 void tlfree(void* mem) { mspace_free(tlms, mem); }
210
211 Unless FOOTERS is defined, each mspace is completely independent.
212 You cannot allocate from one and free to another (although
213 conformance is only weakly checked, so usage errors are not always
214 caught). If FOOTERS is defined, then each chunk carries around a tag
215 indicating its originating mspace, and frees are directed to their
216 originating spaces. Normally, this requires use of locks.
217
218 ------------------------- Compile-time options ---------------------------
219
220 Be careful in setting #define values for numerical constants of type
221 size_t. On some systems, literal values are not automatically extended
222 to size_t precision unless they are explicitly casted. You can also
223 use the symbolic values MAX_SIZE_T, SIZE_T_ONE, etc below.
224
225 WIN32 default: defined if _WIN32 defined
226 Defining WIN32 sets up defaults for MS environment and compilers.
227 Otherwise defaults are for unix. Beware that there seem to be some
228 cases where this malloc might not be a pure drop-in replacement for
229 Win32 malloc: Random-looking failures from Win32 GDI API's (eg;
230 SetDIBits()) may be due to bugs in some video driver implementations
231 when pixel buffers are malloc()ed, and the region spans more than
232 one VirtualAlloc()ed region. Because dlmalloc uses a small (64Kb)
233 default granularity, pixel buffers may straddle virtual allocation
234 regions more often than when using the Microsoft allocator. You can
235 avoid this by using VirtualAlloc() and VirtualFree() for all pixel
236 buffers rather than using malloc(). If this is not possible,
237 recompile this malloc with a larger DEFAULT_GRANULARITY. Note:
238 in cases where MSC and gcc (cygwin) are known to differ on WIN32,
239 conditions use _MSC_VER to distinguish them.
240
241 DLMALLOC_EXPORT default: extern
242 Defines how public APIs are declared. If you want to export via a
243 Windows DLL, you might define this as
244 #define DLMALLOC_EXPORT extern __declspec(dllexport)
245 If you want a POSIX ELF shared object, you might use
246 #define DLMALLOC_EXPORT extern __attribute__((visibility("default")))
247
248 MALLOC_ALIGNMENT default: (size_t)(2 * sizeof(void *))
249 Controls the minimum alignment for malloc'ed chunks. It must be a
250 power of two and at least 8, even on machines for which smaller
251 alignments would suffice. It may be defined as larger than this
252 though. Note however that code and data structures are optimized for
253 the case of 8-byte alignment.
254
255 MSPACES default: 0 (false)
256 If true, compile in support for independent allocation spaces.
257 This is only supported if HAVE_MMAP is true.
258
259 ONLY_MSPACES default: 0 (false)
260 If true, only compile in mspace versions, not regular versions.
261
262 USE_LOCKS default: 0 (false)
263 Causes each call to each public routine to be surrounded with
264 pthread or WIN32 mutex lock/unlock. (If set true, this can be
265 overridden on a per-mspace basis for mspace versions.) If set to a
266 non-zero value other than 1, locks are used, but their
267 implementation is left out, so lock functions must be supplied manually,
268 as described below.
269
270 USE_SPIN_LOCKS default: 1 iff USE_LOCKS and spin locks available
271 If true, uses custom spin locks for locking. This is currently
272 supported only gcc >= 4.1, older gccs on x86 platforms, and recent
273 MS compilers. Otherwise, posix locks or win32 critical sections are
274 used.
275
276 USE_RECURSIVE_LOCKS default: not defined
277 If defined nonzero, uses recursive (aka reentrant) locks, otherwise
278 uses plain mutexes. This is not required for malloc proper, but may
279 be needed for layered allocators such as nedmalloc.
280
281 LOCK_AT_FORK default: not defined
282 If defined nonzero, performs pthread_atfork upon initialization
283 to initialize child lock while holding parent lock. The implementation
284 assumes that pthread locks (not custom locks) are being used. In other
285 cases, you may need to customize the implementation.
286
287 FOOTERS default: 0
288 If true, provide extra checking and dispatching by placing
289 information in the footers of allocated chunks. This adds
290 space and time overhead.
291
292 INSECURE default: 0
293 If true, omit checks for usage errors and heap space overwrites.
294
295 USE_DL_PREFIX default: NOT defined
296 Causes compiler to prefix all public routines with the string 'dl'.
297 This can be useful when you only want to use this malloc in one part
298 of a program, using your regular system malloc elsewhere.
299
300 MALLOC_INSPECT_ALL default: NOT defined
301 If defined, compiles malloc_inspect_all and mspace_inspect_all, that
302 perform traversal of all heap space. Unless access to these
303 functions is otherwise restricted, you probably do not want to
304 include them in secure implementations.
305
306 ABORT default: defined as abort()
307 Defines how to abort on failed checks. On most systems, a failed
308 check cannot die with an "assert" or even print an informative
309 message, because the underlying print routines in turn call malloc,
310 which will fail again. Generally, the best policy is to simply call
311 abort(). It's not very useful to do more than this because many
312 errors due to overwriting will show up as address faults (null, odd
313 addresses etc) rather than malloc-triggered checks, so will also
314 abort. Also, most compilers know that abort() does not return, so
315 can better optimize code conditionally calling it.
316
317 PROCEED_ON_ERROR default: defined as 0 (false)
318 Controls whether detected bad addresses cause them to bypassed
319 rather than aborting. If set, detected bad arguments to free and
320 realloc are ignored. And all bookkeeping information is zeroed out
321 upon a detected overwrite of freed heap space, thus losing the
322 ability to ever return it from malloc again, but enabling the
323 application to proceed. If PROCEED_ON_ERROR is defined, the
324 static variable malloc_corruption_error_count is compiled in
325 and can be examined to see if errors have occurred. This option
326 generates slower code than the default abort policy.
327
328 DEBUG default: NOT defined
329 The DEBUG setting is mainly intended for people trying to modify
330 this code or diagnose problems when porting to new platforms.
331 However, it may also be able to better isolate user errors than just
332 using runtime checks. The assertions in the check routines spell
333 out in more detail the assumptions and invariants underlying the
334 algorithms. The checking is fairly extensive, and will slow down
335 execution noticeably. Calling malloc_stats or mallinfo with DEBUG
336 set will attempt to check every non-mmapped allocated and free chunk
337 in the course of computing the summaries.
338
339 ABORT_ON_ASSERT_FAILURE default: defined as 1 (true)
340 Debugging assertion failures can be nearly impossible if your
341 version of the assert macro causes malloc to be called, which will
342 lead to a cascade of further failures, blowing the runtime stack.
343 ABORT_ON_ASSERT_FAILURE cause assertions failures to call abort(),
344 which will usually make debugging easier.
345
346 MALLOC_FAILURE_ACTION default: sets errno to ENOMEM, or no-op on win32
347 The action to take before "return 0" when malloc fails to be able to
348 return memory because there is none available.
349
350 HAVE_MORECORE default: 1 (true) unless win32 or ONLY_MSPACES
351 True if this system supports sbrk or an emulation of it.
352
353 MORECORE default: sbrk
354 The name of the sbrk-style system routine to call to obtain more
355 memory. See below for guidance on writing custom MORECORE
356 functions. The type of the argument to sbrk/MORECORE varies across
357 systems. It cannot be size_t, because it supports negative
358 arguments, so it is normally the signed type of the same width as
359 size_t (sometimes declared as "intptr_t"). It doesn't much matter
360 though. Internally, we only call it with arguments less than half
361 the max value of a size_t, which should work across all reasonable
362 possibilities, although sometimes generating compiler warnings.
363
364 MORECORE_CONTIGUOUS default: 1 (true) if HAVE_MORECORE
365 If true, take advantage of fact that consecutive calls to MORECORE
366 with positive arguments always return contiguous increasing
367 addresses. This is true of unix sbrk. It does not hurt too much to
368 set it true anyway, since malloc copes with non-contiguities.
369 Setting it false when definitely non-contiguous saves time
370 and possibly wasted space it would take to discover this though.
371
372 MORECORE_CANNOT_TRIM default: NOT defined
373 True if MORECORE cannot release space back to the system when given
374 negative arguments. This is generally necessary only if you are
375 using a hand-crafted MORECORE function that cannot handle negative
376 arguments.
377
378 NO_SEGMENT_TRAVERSAL default: 0
379 If non-zero, suppresses traversals of memory segments
380 returned by either MORECORE or CALL_MMAP. This disables
381 merging of segments that are contiguous, and selectively
382 releasing them to the OS if unused, but bounds execution times.
383
384 HAVE_MMAP default: 1 (true)
385 True if this system supports mmap or an emulation of it. If so, and
386 HAVE_MORECORE is not true, MMAP is used for all system
387 allocation. If set and HAVE_MORECORE is true as well, MMAP is
388 primarily used to directly allocate very large blocks. It is also
389 used as a backup strategy in cases where MORECORE fails to provide
390 space from system. Note: A single call to MUNMAP is assumed to be
391 able to unmap memory that may have be allocated using multiple calls
392 to MMAP, so long as they are adjacent.
393
394 HAVE_MREMAP default: 1 on linux, else 0
395 If true realloc() uses mremap() to re-allocate large blocks and
396 extend or shrink allocation spaces.
397
398 MMAP_CLEARS default: 1 except on WINCE.
399 True if mmap clears memory so calloc doesn't need to. This is true
400 for standard unix mmap using /dev/zero and on WIN32 except for WINCE.
401
402 USE_BUILTIN_FFS default: 0 (i.e., not used)
403 Causes malloc to use the builtin ffs() function to compute indices.
404 Some compilers may recognize and intrinsify ffs to be faster than the
405 supplied C version. Also, the case of x86 using gcc is special-cased
406 to an asm instruction, so is already as fast as it can be, and so
407 this setting has no effect. Similarly for Win32 under recent MS compilers.
408 (On most x86s, the asm version is only slightly faster than the C version.)
409
410 malloc_getpagesize default: derive from system includes, or 4096.
411 The system page size. To the extent possible, this malloc manages
412 memory from the system in page-size units. This may be (and
413 usually is) a function rather than a constant. This is ignored
414 if WIN32, where page size is determined using getSystemInfo during
415 initialization.
416
417 USE_DEV_RANDOM default: 0 (i.e., not used)
418 Causes malloc to use /dev/random to initialize secure magic seed for
419 stamping footers. Otherwise, the current time is used.
420
421 NO_MALLINFO default: 0
422 If defined, don't compile "mallinfo". This can be a simple way
423 of dealing with mismatches between system declarations and
424 those in this file.
425
426 MALLINFO_FIELD_TYPE default: size_t
427 The type of the fields in the mallinfo struct. This was originally
428 defined as "int" in SVID etc, but is more usefully defined as
429 size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set
430
431 NO_MALLOC_STATS default: 0
432 If defined, don't compile "malloc_stats". This avoids calls to
433 fprintf and bringing in stdio dependencies you might not want.
434
435 REALLOC_ZERO_BYTES_FREES default: not defined
436 This should be set if a call to realloc with zero bytes should
437 be the same as a call to free. Some people think it should. Otherwise,
438 since this malloc returns a unique pointer for malloc(0), so does
439 realloc(p, 0).
440
441 LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
442 LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H, LACKS_ERRNO_H
443 LACKS_STDLIB_H LACKS_SCHED_H LACKS_TIME_H default: NOT defined unless on WIN32
444 Define these if your system does not have these header files.
445 You might need to manually insert some of the declarations they provide.
446
447 DEFAULT_GRANULARITY default: page size if MORECORE_CONTIGUOUS,
448 system_info.dwAllocationGranularity in WIN32,
449 otherwise 64K.
450 Also settable using mallopt(M_GRANULARITY, x)
451 The unit for allocating and deallocating memory from the system. On
452 most systems with contiguous MORECORE, there is no reason to
453 make this more than a page. However, systems with MMAP tend to
454 either require or encourage larger granularities. You can increase
455 this value to prevent system allocation functions to be called so
456 often, especially if they are slow. The value must be at least one
457 page and must be a power of two. Setting to 0 causes initialization
458 to either page size or win32 region size. (Note: In previous
459 versions of malloc, the equivalent of this option was called
460 "TOP_PAD")
461
462 DEFAULT_TRIM_THRESHOLD default: 2MB
463 Also settable using mallopt(M_TRIM_THRESHOLD, x)
464 The maximum amount of unused top-most memory to keep before
465 releasing via malloc_trim in free(). Automatic trimming is mainly
466 useful in long-lived programs using contiguous MORECORE. Because
467 trimming via sbrk can be slow on some systems, and can sometimes be
468 wasteful (in cases where programs immediately afterward allocate
469 more large chunks) the value should be high enough so that your
470 overall system performance would improve by releasing this much
471 memory. As a rough guide, you might set to a value close to the
472 average size of a process (program) running on your system.
473 Releasing this much memory would allow such a process to run in
474 memory. Generally, it is worth tuning trim thresholds when a
475 program undergoes phases where several large chunks are allocated
476 and released in ways that can reuse each other's storage, perhaps
477 mixed with phases where there are no such chunks at all. The trim
478 value must be greater than page size to have any useful effect. To
479 disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
480 some people use of mallocing a huge space and then freeing it at
481 program startup, in an attempt to reserve system memory, doesn't
482 have the intended effect under automatic trimming, since that memory
483 will immediately be returned to the system.
484
485 DEFAULT_MMAP_THRESHOLD default: 256K
486 Also settable using mallopt(M_MMAP_THRESHOLD, x)
487 The request size threshold for using MMAP to directly service a
488 request. Requests of at least this size that cannot be allocated
489 using already-existing space will be serviced via mmap. (If enough
490 normal freed space already exists it is used instead.) Using mmap
491 segregates relatively large chunks of memory so that they can be
492 individually obtained and released from the host system. A request
493 serviced through mmap is never reused by any other request (at least
494 not directly; the system may just so happen to remap successive
495 requests to the same locations). Segregating space in this way has
496 the benefits that: Mmapped space can always be individually released
497 back to the system, which helps keep the system level memory demands
498 of a long-lived program low. Also, mapped memory doesn't become
499 `locked' between other chunks, as can happen with normally allocated
500 chunks, which means that even trimming via malloc_trim would not
501 release them. However, it has the disadvantage that the space
502 cannot be reclaimed, consolidated, and then used to service later
503 requests, as happens with normal chunks. The advantages of mmap
504 nearly always outweigh disadvantages for "large" chunks, but the
505 value of "large" may vary across systems. The default is an
506 empirically derived value that works well in most systems. You can
507 disable mmap by setting to MAX_SIZE_T.
508
509 MAX_RELEASE_CHECK_RATE default: 4095 unless not HAVE_MMAP
510 The number of consolidated frees between checks to release
511 unused segments when freeing. When using non-contiguous segments,
512 especially with multiple mspaces, checking only for topmost space
513 doesn't always suffice to trigger trimming. To compensate for this,
514 free() will, with a period of MAX_RELEASE_CHECK_RATE (or the
515 current number of segments, if greater) try to release unused
516 segments to the OS when freeing chunks that result in
517 consolidation. The best value for this parameter is a compromise
518 between slowing down frees with relatively costly checks that
519 rarely trigger versus holding on to unused memory. To effectively
520 disable, set to MAX_SIZE_T. This may lead to a very slight speed
521 improvement at the expense of carrying around more memory.
522 */
523
524 /* Customizations { */
525
526 #define HAVE_MMAP 1
527 #define ONLY_MSPACES 1
528 #define HAVE_MREMAP 0
529 #define NO_MALLOC_STATS 1
530 #define USE_DL_PREFIX 1
531 #define USE_LOCKS 1
532 #define USE_SPIN_LOCKS 1
533 #define malloc_getpagesize ((size_t)4096U)
534 /* } Customizations */
535
536 /* Version identifier to allow people to support multiple versions */
537 #ifndef DLMALLOC_VERSION
538 #define DLMALLOC_VERSION 20806
539 #endif /* DLMALLOC_VERSION */
540
541 #ifndef DLMALLOC_EXPORT
542 #define DLMALLOC_EXPORT extern
543 #endif
544
545 #ifndef WIN32
546 #ifdef _WIN32
547 #define WIN32 1
548 #endif /* _WIN32 */
549 #ifdef _WIN32_WCE
550 #define LACKS_FCNTL_H
551 #define WIN32 1
552 #endif /* _WIN32_WCE */
553 #endif /* WIN32 */
554 #ifdef WIN32
555 #define WIN32_LEAN_AND_MEAN
556 #include <windows.h>
557 #include <tchar.h>
558 #define HAVE_MMAP 1
559 #define HAVE_MORECORE 0
560 #define LACKS_UNISTD_H
561 #define LACKS_SYS_PARAM_H
562 #define LACKS_SYS_MMAN_H
563 #define LACKS_STRING_H
564 #define LACKS_STRINGS_H
565 #define LACKS_SYS_TYPES_H
566 #define LACKS_ERRNO_H
567 #define LACKS_SCHED_H
568 #ifndef MALLOC_FAILURE_ACTION
569 #define MALLOC_FAILURE_ACTION
570 #endif /* MALLOC_FAILURE_ACTION */
571 #ifndef MMAP_CLEARS
572 #ifdef _WIN32_WCE /* WINCE reportedly does not clear */
573 #define MMAP_CLEARS 0
574 #else
575 #define MMAP_CLEARS 1
576 #endif /* _WIN32_WCE */
577 #endif /*MMAP_CLEARS */
578 #endif /* WIN32 */
579
580 #if defined(DARWIN) || defined(_DARWIN)
581 /* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
582 #ifndef HAVE_MORECORE
583 #define HAVE_MORECORE 0
584 #define HAVE_MMAP 1
585 /* OSX allocators provide 16 byte alignment */
586 #ifndef MALLOC_ALIGNMENT
587 #define MALLOC_ALIGNMENT ((size_t)16U)
588 #endif
589 #endif /* HAVE_MORECORE */
590 #endif /* DARWIN */
591
592 #ifndef LACKS_SYS_TYPES_H
593 #include <sys/types_internal.h> /* For size_t */
594 #endif /* LACKS_SYS_TYPES_H */
595
596 /* The maximum possible size_t value has all bits set */
597 #define MAX_SIZE_T (~(size_t)0)
598
599 #ifndef USE_LOCKS /* ensure true if spin or recursive locks set */
600 #define USE_LOCKS ((defined(USE_SPIN_LOCKS) && USE_SPIN_LOCKS != 0) || \
601 (defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0))
602 #endif /* USE_LOCKS */
603
604 #if USE_LOCKS /* Spin locks for gcc >= 4.1, older gcc on x86, MSC >= 1310 */
605 #if ((defined(__GNUC__) && \
606 ((__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 1)) || \
607 defined(__i386__) || defined(__x86_64__))) || \
608 (defined(_MSC_VER) && _MSC_VER>=1310))
609 #ifndef USE_SPIN_LOCKS
610 #define USE_SPIN_LOCKS 1
611 #endif /* USE_SPIN_LOCKS */
612 #elif USE_SPIN_LOCKS
613 #error "USE_SPIN_LOCKS defined without implementation"
614 #endif /* ... locks available... */
615 #elif !defined(USE_SPIN_LOCKS)
616 #define USE_SPIN_LOCKS 0
617 #endif /* USE_LOCKS */
618
619 #ifndef ONLY_MSPACES
620 #define ONLY_MSPACES 0
621 #endif /* ONLY_MSPACES */
622 #ifndef MSPACES
623 #if ONLY_MSPACES
624 #define MSPACES 1
625 #else /* ONLY_MSPACES */
626 #define MSPACES 0
627 #endif /* ONLY_MSPACES */
628 #endif /* MSPACES */
629 #ifndef MALLOC_ALIGNMENT
630 #define MALLOC_ALIGNMENT ((size_t)(2 * sizeof(void *)))
631 #endif /* MALLOC_ALIGNMENT */
632 #ifndef FOOTERS
633 #define FOOTERS 0
634 #endif /* FOOTERS */
635 #ifndef ABORT
636 #define ABORT abort()
637 #endif /* ABORT */
638 #ifndef ABORT_ON_ASSERT_FAILURE
639 #define ABORT_ON_ASSERT_FAILURE 1
640 #endif /* ABORT_ON_ASSERT_FAILURE */
641 #ifndef PROCEED_ON_ERROR
642 #define PROCEED_ON_ERROR 0
643 #endif /* PROCEED_ON_ERROR */
644
645 #ifndef INSECURE
646 #define INSECURE 0
647 #endif /* INSECURE */
648 #ifndef MALLOC_INSPECT_ALL
649 #define MALLOC_INSPECT_ALL 0
650 #endif /* MALLOC_INSPECT_ALL */
651 #ifndef HAVE_MMAP
652 #define HAVE_MMAP 1
653 #endif /* HAVE_MMAP */
654 #ifndef MMAP_CLEARS
655 #define MMAP_CLEARS 1
656 #endif /* MMAP_CLEARS */
657 #ifndef HAVE_MREMAP
658 #ifdef linux
659 #define HAVE_MREMAP 1
660 #define _GNU_SOURCE /* Turns on mremap() definition */
661 #else /* linux */
662 #define HAVE_MREMAP 0
663 #endif /* linux */
664 #endif /* HAVE_MREMAP */
665 #ifndef MALLOC_FAILURE_ACTION
666 #define MALLOC_FAILURE_ACTION errno = ENOMEM;
667 #endif /* MALLOC_FAILURE_ACTION */
668
669 #ifndef HAVE_MORECORE
670 #if ONLY_MSPACES
671 #define HAVE_MORECORE 0
672 #else /* ONLY_MSPACES */
673 #define HAVE_MORECORE 1
674 #endif /* ONLY_MSPACES */
675 #endif /* HAVE_MORECORE */
676 #if !HAVE_MORECORE
677 #define MORECORE_CONTIGUOUS 0
678 #else /* !HAVE_MORECORE */
679 #define MORECORE_DEFAULT sbrk
680 #ifndef MORECORE_CONTIGUOUS
681 #define MORECORE_CONTIGUOUS 1
682 #endif /* MORECORE_CONTIGUOUS */
683 #endif /* HAVE_MORECORE */
684 #ifndef DEFAULT_GRANULARITY
685 #if (MORECORE_CONTIGUOUS || defined(WIN32))
686 #define DEFAULT_GRANULARITY (0) /* 0 means to compute in init_mparams */
687 #else /* MORECORE_CONTIGUOUS */
688 #define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
689 #endif /* MORECORE_CONTIGUOUS */
690 #endif /* DEFAULT_GRANULARITY */
691 #ifndef DEFAULT_TRIM_THRESHOLD
692 #ifndef MORECORE_CANNOT_TRIM
693 #define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
694 #else /* MORECORE_CANNOT_TRIM */
695 #define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
696 #endif /* MORECORE_CANNOT_TRIM */
697 #endif /* DEFAULT_TRIM_THRESHOLD */
698 #ifndef DEFAULT_MMAP_THRESHOLD
699 #if HAVE_MMAP
700 #define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
701 #else /* HAVE_MMAP */
702 #define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
703 #endif /* HAVE_MMAP */
704 #endif /* DEFAULT_MMAP_THRESHOLD */
705 #ifndef MAX_RELEASE_CHECK_RATE
706 #if HAVE_MMAP
707 #define MAX_RELEASE_CHECK_RATE 4095
708 #else
709 #define MAX_RELEASE_CHECK_RATE MAX_SIZE_T
710 #endif /* HAVE_MMAP */
711 #endif /* MAX_RELEASE_CHECK_RATE */
712 #ifndef USE_BUILTIN_FFS
713 #define USE_BUILTIN_FFS 0
714 #endif /* USE_BUILTIN_FFS */
715 #ifndef USE_DEV_RANDOM
716 #define USE_DEV_RANDOM 0
717 #endif /* USE_DEV_RANDOM */
718 #ifndef NO_MALLINFO
719 #define NO_MALLINFO 0
720 #endif /* NO_MALLINFO */
721 #ifndef MALLINFO_FIELD_TYPE
722 #define MALLINFO_FIELD_TYPE size_t
723 #endif /* MALLINFO_FIELD_TYPE */
724 #ifndef NO_MALLOC_STATS
725 #define NO_MALLOC_STATS 0
726 #endif /* NO_MALLOC_STATS */
727 #ifndef NO_SEGMENT_TRAVERSAL
728 #define NO_SEGMENT_TRAVERSAL 0
729 #endif /* NO_SEGMENT_TRAVERSAL */
730
731 /*
732 mallopt tuning options. SVID/XPG defines four standard parameter
733 numbers for mallopt, normally defined in malloc.h. None of these
734 are used in this malloc, so setting them has no effect. But this
735 malloc does support the following options.
736 */
737
738 #define M_TRIM_THRESHOLD (-1)
739 #define M_GRANULARITY (-2)
740 #define M_MMAP_THRESHOLD (-3)
741
742 /* ------------------------ Mallinfo declarations ------------------------ */
743
744 #if !NO_MALLINFO
745 /*
746 This version of malloc supports the standard SVID/XPG mallinfo
747 routine that returns a struct containing usage properties and
748 statistics. It should work on any system that has a
749 /usr/include/malloc.h defining struct mallinfo. The main
750 declaration needed is the mallinfo struct that is returned (by-copy)
751 by mallinfo(). The malloinfo struct contains a bunch of fields that
752 are not even meaningful in this version of malloc. These fields are
753 are instead filled by mallinfo() with other numbers that might be of
754 interest.
755
756 HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
757 /usr/include/malloc.h file that includes a declaration of struct
758 mallinfo. If so, it is included; else a compliant version is
759 declared below. These must be precisely the same for mallinfo() to
760 work. The original SVID version of this struct, defined on most
761 systems with mallinfo, declares all fields as ints. But some others
762 define as unsigned long. If your system defines the fields using a
763 type of different width than listed here, you MUST #include your
764 system version and #define HAVE_USR_INCLUDE_MALLOC_H.
765 */
766
767 /* #define HAVE_USR_INCLUDE_MALLOC_H */
768
769 #ifdef HAVE_USR_INCLUDE_MALLOC_H
770 #include "/usr/include/malloc.h"
771 #else /* HAVE_USR_INCLUDE_MALLOC_H */
772 #ifndef STRUCT_MALLINFO_DECLARED
773 /* HP-UX (and others?) redefines mallinfo unless _STRUCT_MALLINFO is defined */
774 #define _STRUCT_MALLINFO
775 #define STRUCT_MALLINFO_DECLARED 1
776 struct mallinfo {
777 MALLINFO_FIELD_TYPE arena; /* non-mmapped space allocated from system */
778 MALLINFO_FIELD_TYPE ordblks; /* number of free chunks */
779 MALLINFO_FIELD_TYPE smblks; /* always 0 */
780 MALLINFO_FIELD_TYPE hblks; /* always 0 */
781 MALLINFO_FIELD_TYPE hblkhd; /* space in mmapped regions */
782 MALLINFO_FIELD_TYPE usmblks; /* maximum total allocated space */
783 MALLINFO_FIELD_TYPE fsmblks; /* always 0 */
784 MALLINFO_FIELD_TYPE uordblks; /* total allocated space */
785 MALLINFO_FIELD_TYPE fordblks; /* total free space */
786 MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */
787 };
788 #endif /* STRUCT_MALLINFO_DECLARED */
789 #endif /* HAVE_USR_INCLUDE_MALLOC_H */
790 #endif /* NO_MALLINFO */
791
792 /*
793 Try to persuade compilers to inline. The most critical functions for
794 inlining are defined as macros, so these aren't used for them.
795 */
796
797 #ifndef FORCEINLINE
798 #if defined(__GNUC__)
799 #define FORCEINLINE __inline __attribute__ ((always_inline))
800 #elif defined(_MSC_VER)
801 #define FORCEINLINE __forceinline
802 #endif
803 #endif
804 #ifndef NOINLINE
805 #if defined(__GNUC__)
806 #define NOINLINE __attribute__ ((noinline))
807 #elif defined(_MSC_VER)
808 #define NOINLINE __declspec(noinline)
809 #else
810 #define NOINLINE
811 #endif
812 #endif
813
814 #ifdef __cplusplus
815 extern "C" {
816 #ifndef FORCEINLINE
817 #define FORCEINLINE inline
818 #endif
819 #endif /* __cplusplus */
820 #ifndef FORCEINLINE
821 #define FORCEINLINE
822 #endif
823
824 #if !ONLY_MSPACES
825
826 /* ------------------- Declarations of public routines ------------------- */
827
828 #ifndef USE_DL_PREFIX
829 #define dlcalloc calloc
830 #define dlfree free
831 #define dlmalloc malloc
832 #define dlmemalign memalign
833 #define dlposix_memalign posix_memalign
834 #define dlrealloc realloc
835 #define dlrealloc_in_place realloc_in_place
836 #define dlvalloc valloc
837 #define dlpvalloc pvalloc
838 #define dlmallinfo mallinfo
839 #define dlmallopt mallopt
840 #define dlmalloc_trim malloc_trim
841 #define dlmalloc_stats malloc_stats
842 #define dlmalloc_usable_size malloc_usable_size
843 #define dlmalloc_footprint malloc_footprint
844 #define dlmalloc_max_footprint malloc_max_footprint
845 #define dlmalloc_footprint_limit malloc_footprint_limit
846 #define dlmalloc_set_footprint_limit malloc_set_footprint_limit
847 #define dlmalloc_inspect_all malloc_inspect_all
848 #define dlindependent_calloc independent_calloc
849 #define dlindependent_comalloc independent_comalloc
850 #define dlbulk_free bulk_free
851 #endif /* USE_DL_PREFIX */
852
853 /*
854 malloc(size_t n)
855 Returns a pointer to a newly allocated chunk of at least n bytes, or
856 null if no space is available, in which case errno is set to ENOMEM
857 on ANSI C systems.
858
859 If n is zero, malloc returns a minimum-sized chunk. (The minimum
860 size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
861 systems.) Note that size_t is an unsigned type, so calls with
862 arguments that would be negative if signed are interpreted as
863 requests for huge amounts of space, which will often fail. The
864 maximum supported value of n differs across systems, but is in all
865 cases less than the maximum representable value of a size_t.
866 */
867 DLMALLOC_EXPORT void* dlmalloc(size_t);
868
869 /*
870 free(void* p)
871 Releases the chunk of memory pointed to by p, that had been previously
872 allocated using malloc or a related routine such as realloc.
873 It has no effect if p is null. If p was not malloced or already
874 freed, free(p) will by default cause the current program to abort.
875 */
876 DLMALLOC_EXPORT void dlfree(void*);
877
878 /*
879 calloc(size_t n_elements, size_t element_size);
880 Returns a pointer to n_elements * element_size bytes, with all locations
881 set to zero.
882 */
883 DLMALLOC_EXPORT void* dlcalloc(size_t, size_t);
884
885 /*
886 realloc(void* p, size_t n)
887 Returns a pointer to a chunk of size n that contains the same data
888 as does chunk p up to the minimum of (n, p's size) bytes, or null
889 if no space is available.
890
891 The returned pointer may or may not be the same as p. The algorithm
892 prefers extending p in most cases when possible, otherwise it
893 employs the equivalent of a malloc-copy-free sequence.
894
895 If p is null, realloc is equivalent to malloc.
896
897 If space is not available, realloc returns null, errno is set (if on
898 ANSI) and p is NOT freed.
899
900 if n is for fewer bytes than already held by p, the newly unused
901 space is lopped off and freed if possible. realloc with a size
902 argument of zero (re)allocates a minimum-sized chunk.
903
904 The old unix realloc convention of allowing the last-free'd chunk
905 to be used as an argument to realloc is not supported.
906 */
907 DLMALLOC_EXPORT void* dlrealloc(void*, size_t);
908
909 /*
910 realloc_in_place(void* p, size_t n)
911 Resizes the space allocated for p to size n, only if this can be
912 done without moving p (i.e., only if there is adjacent space
913 available if n is greater than p's current allocated size, or n is
914 less than or equal to p's size). This may be used instead of plain
915 realloc if an alternative allocation strategy is needed upon failure
916 to expand space; for example, reallocation of a buffer that must be
917 memory-aligned or cleared. You can use realloc_in_place to trigger
918 these alternatives only when needed.
919
920 Returns p if successful; otherwise null.
921 */
922 DLMALLOC_EXPORT void* dlrealloc_in_place(void*, size_t);
923
924 /*
925 memalign(size_t alignment, size_t n);
926 Returns a pointer to a newly allocated chunk of n bytes, aligned
927 in accord with the alignment argument.
928
929 The alignment argument should be a power of two. If the argument is
930 not a power of two, the nearest greater power is used.
931 8-byte alignment is guaranteed by normal malloc calls, so don't
932 bother calling memalign with an argument of 8 or less.
933
934 Overreliance on memalign is a sure way to fragment space.
935 */
936 DLMALLOC_EXPORT void* dlmemalign(size_t, size_t);
937
938 /*
939 int posix_memalign(void** pp, size_t alignment, size_t n);
940 Allocates a chunk of n bytes, aligned in accord with the alignment
941 argument. Differs from memalign only in that it (1) assigns the
942 allocated memory to *pp rather than returning it, (2) fails and
943 returns EINVAL if the alignment is not a power of two (3) fails and
944 returns ENOMEM if memory cannot be allocated.
945 */
946 DLMALLOC_EXPORT int dlposix_memalign(void**, size_t, size_t);
947
948 /*
949 valloc(size_t n);
950 Equivalent to memalign(pagesize, n), where pagesize is the page
951 size of the system. If the pagesize is unknown, 4096 is used.
952 */
953 DLMALLOC_EXPORT void* dlvalloc(size_t);
954
955 /*
956 mallopt(int parameter_number, int parameter_value)
957 Sets tunable parameters The format is to provide a
958 (parameter-number, parameter-value) pair. mallopt then sets the
959 corresponding parameter to the argument value if it can (i.e., so
960 long as the value is meaningful), and returns 1 if successful else
961 0. To workaround the fact that mallopt is specified to use int,
962 not size_t parameters, the value -1 is specially treated as the
963 maximum unsigned size_t value.
964
965 SVID/XPG/ANSI defines four standard param numbers for mallopt,
966 normally defined in malloc.h. None of these are use in this malloc,
967 so setting them has no effect. But this malloc also supports other
968 options in mallopt. See below for details. Briefly, supported
969 parameters are as follows (listed defaults are for "typical"
970 configurations).
971
972 Symbol param # default allowed param values
973 M_TRIM_THRESHOLD -1 2*1024*1024 any (-1 disables)
974 M_GRANULARITY -2 page size any power of 2 >= page size
975 M_MMAP_THRESHOLD -3 256*1024 any (or 0 if no MMAP support)
976 */
977 DLMALLOC_EXPORT int dlmallopt(int, int);
978
979 /*
980 malloc_footprint();
981 Returns the number of bytes obtained from the system. The total
982 number of bytes allocated by malloc, realloc etc., is less than this
983 value. Unlike mallinfo, this function returns only a precomputed
984 result, so can be called frequently to monitor memory consumption.
985 Even if locks are otherwise defined, this function does not use them,
986 so results might not be up to date.
987 */
988 DLMALLOC_EXPORT size_t dlmalloc_footprint(void);
989
990 /*
991 malloc_max_footprint();
992 Returns the maximum number of bytes obtained from the system. This
993 value will be greater than current footprint if deallocated space
994 has been reclaimed by the system. The peak number of bytes allocated
995 by malloc, realloc etc., is less than this value. Unlike mallinfo,
996 this function returns only a precomputed result, so can be called
997 frequently to monitor memory consumption. Even if locks are
998 otherwise defined, this function does not use them, so results might
999 not be up to date.
1000 */
1001 DLMALLOC_EXPORT size_t dlmalloc_max_footprint(void);
1002
1003 /*
1004 malloc_footprint_limit();
1005 Returns the number of bytes that the heap is allowed to obtain from
1006 the system, returning the last value returned by
1007 malloc_set_footprint_limit, or the maximum size_t value if
1008 never set. The returned value reflects a permission. There is no
1009 guarantee that this number of bytes can actually be obtained from
1010 the system.
1011 */
1012 DLMALLOC_EXPORT size_t dlmalloc_footprint_limit();
1013
1014 /*
1015 malloc_set_footprint_limit();
1016 Sets the maximum number of bytes to obtain from the system, causing
1017 failure returns from malloc and related functions upon attempts to
1018 exceed this value. The argument value may be subject to page
1019 rounding to an enforceable limit; this actual value is returned.
1020 Using an argument of the maximum possible size_t effectively
1021 disables checks. If the argument is less than or equal to the
1022 current malloc_footprint, then all future allocations that require
1023 additional system memory will fail. However, invocation cannot
1024 retroactively deallocate existing used memory.
1025 */
1026 DLMALLOC_EXPORT size_t dlmalloc_set_footprint_limit(size_t bytes);
1027
1028 #if MALLOC_INSPECT_ALL
1029 /*
1030 malloc_inspect_all(void(*handler)(void *start,
1031 void *end,
1032 size_t used_bytes,
1033 void* callback_arg),
1034 void* arg);
1035 Traverses the heap and calls the given handler for each managed
1036 region, skipping all bytes that are (or may be) used for bookkeeping
1037 purposes. Traversal does not include include chunks that have been
1038 directly memory mapped. Each reported region begins at the start
1039 address, and continues up to but not including the end address. The
1040 first used_bytes of the region contain allocated data. If
1041 used_bytes is zero, the region is unallocated. The handler is
1042 invoked with the given callback argument. If locks are defined, they
1043 are held during the entire traversal. It is a bad idea to invoke
1044 other malloc functions from within the handler.
1045
1046 For example, to count the number of in-use chunks with size greater
1047 than 1000, you could write:
1048 static int count = 0;
1049 void count_chunks(void* start, void* end, size_t used, void* arg) {
1050 if (used >= 1000) ++count;
1051 }
1052 then:
1053 malloc_inspect_all(count_chunks, NULL);
1054
1055 malloc_inspect_all is compiled only if MALLOC_INSPECT_ALL is defined.
1056 */
1057 DLMALLOC_EXPORT void dlmalloc_inspect_all(void(*handler)(void*, void *, size_t, void*),
1058 void* arg);
1059
1060 #endif /* MALLOC_INSPECT_ALL */
1061
1062 #if !NO_MALLINFO
1063 /*
1064 mallinfo()
1065 Returns (by copy) a struct containing various summary statistics:
1066
1067 arena: current total non-mmapped bytes allocated from system
1068 ordblks: the number of free chunks
1069 smblks: always zero.
1070 hblks: current number of mmapped regions
1071 hblkhd: total bytes held in mmapped regions
1072 usmblks: the maximum total allocated space. This will be greater
1073 than current total if trimming has occurred.
1074 fsmblks: always zero
1075 uordblks: current total allocated space (normal or mmapped)
1076 fordblks: total free space
1077 keepcost: the maximum number of bytes that could ideally be released
1078 back to system via malloc_trim. ("ideally" means that
1079 it ignores page restrictions etc.)
1080
1081 Because these fields are ints, but internal bookkeeping may
1082 be kept as longs, the reported values may wrap around zero and
1083 thus be inaccurate.
1084 */
1085 DLMALLOC_EXPORT struct mallinfo dlmallinfo(void);
1086 #endif /* NO_MALLINFO */
1087
1088 /*
1089 independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);
1090
1091 independent_calloc is similar to calloc, but instead of returning a
1092 single cleared space, it returns an array of pointers to n_elements
1093 independent elements that can hold contents of size elem_size, each
1094 of which starts out cleared, and can be independently freed,
1095 realloc'ed etc. The elements are guaranteed to be adjacently
1096 allocated (this is not guaranteed to occur with multiple callocs or
1097 mallocs), which may also improve cache locality in some
1098 applications.
1099
1100 The "chunks" argument is optional (i.e., may be null, which is
1101 probably the most typical usage). If it is null, the returned array
1102 is itself dynamically allocated and should also be freed when it is
1103 no longer needed. Otherwise, the chunks array must be of at least
1104 n_elements in length. It is filled in with the pointers to the
1105 chunks.
1106
1107 In either case, independent_calloc returns this pointer array, or
1108 null if the allocation failed. If n_elements is zero and "chunks"
1109 is null, it returns a chunk representing an array with zero elements
1110 (which should be freed if not wanted).
1111
1112 Each element must be freed when it is no longer needed. This can be
1113 done all at once using bulk_free.
1114
1115 independent_calloc simplifies and speeds up implementations of many
1116 kinds of pools. It may also be useful when constructing large data
1117 structures that initially have a fixed number of fixed-sized nodes,
1118 but the number is not known at compile time, and some of the nodes
1119 may later need to be freed. For example:
1120
1121 struct Node { int item; struct Node* next; };
1122
1123 struct Node* build_list() {
1124 struct Node** pool;
1125 int n = read_number_of_nodes_needed();
1126 if (n <= 0) return 0;
1127 pool = (struct Node**)(independent_calloc(n, sizeof(struct Node), 0);
1128 if (pool == 0) die();
1129 // organize into a linked list...
1130 struct Node* first = pool[0];
1131 for (i = 0; i < n-1; ++i)
1132 pool[i]->next = pool[i+1];
1133 free(pool); // Can now free the array (or not, if it is needed later)
1134 return first;
1135 }
1136 */
1137 DLMALLOC_EXPORT void** dlindependent_calloc(size_t, size_t, void**);
1138
1139 /*
1140 independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);
1141
1142 independent_comalloc allocates, all at once, a set of n_elements
1143 chunks with sizes indicated in the "sizes" array. It returns
1144 an array of pointers to these elements, each of which can be
1145 independently freed, realloc'ed etc. The elements are guaranteed to
1146 be adjacently allocated (this is not guaranteed to occur with
1147 multiple callocs or mallocs), which may also improve cache locality
1148 in some applications.
1149
1150 The "chunks" argument is optional (i.e., may be null). If it is null
1151 the returned array is itself dynamically allocated and should also
1152 be freed when it is no longer needed. Otherwise, the chunks array
1153 must be of at least n_elements in length. It is filled in with the
1154 pointers to the chunks.
1155
1156 In either case, independent_comalloc returns this pointer array, or
1157 null if the allocation failed. If n_elements is zero and chunks is
1158 null, it returns a chunk representing an array with zero elements
1159 (which should be freed if not wanted).
1160
1161 Each element must be freed when it is no longer needed. This can be
1162 done all at once using bulk_free.
1163
1164 independent_comallac differs from independent_calloc in that each
1165 element may have a different size, and also that it does not
1166 automatically clear elements.
1167
1168 independent_comalloc can be used to speed up allocation in cases
1169 where several structs or objects must always be allocated at the
1170 same time. For example:
1171
1172 struct Head { ... }
1173 struct Foot { ... }
1174
1175 void send_message(char* msg) {
1176 int msglen = strlen(msg);
1177 size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
1178 void* chunks[3];
1179 if (independent_comalloc(3, sizes, chunks) == 0)
1180 die();
1181 struct Head* head = (struct Head*)(chunks[0]);
1182 char* body = (char*)(chunks[1]);
1183 struct Foot* foot = (struct Foot*)(chunks[2]);
1184 // ...
1185 }
1186
1187 In general though, independent_comalloc is worth using only for
1188 larger values of n_elements. For small values, you probably won't
1189 detect enough difference from series of malloc calls to bother.
1190
1191 Overuse of independent_comalloc can increase overall memory usage,
1192 since it cannot reuse existing noncontiguous small chunks that
1193 might be available for some of the elements.
1194 */
1195 DLMALLOC_EXPORT void** dlindependent_comalloc(size_t, size_t*, void**);
1196
1197 /*
1198 bulk_free(void* array[], size_t n_elements)
1199 Frees and clears (sets to null) each non-null pointer in the given
1200 array. This is likely to be faster than freeing them one-by-one.
1201 If footers are used, pointers that have been allocated in different
1202 mspaces are not freed or cleared, and the count of all such pointers
1203 is returned. For large arrays of pointers with poor locality, it
1204 may be worthwhile to sort this array before calling bulk_free.
1205 */
1206 DLMALLOC_EXPORT size_t dlbulk_free(void**, size_t n_elements);
1207
1208 /*
1209 pvalloc(size_t n);
1210 Equivalent to valloc(minimum-page-that-holds(n)), that is,
1211 round up n to nearest pagesize.
1212 */
1213 DLMALLOC_EXPORT void* dlpvalloc(size_t);
1214
1215 /*
1216 malloc_trim(size_t pad);
1217
1218 If possible, gives memory back to the system (via negative arguments
1219 to sbrk) if there is unused memory at the `high' end of the malloc
1220 pool or in unused MMAP segments. You can call this after freeing
1221 large blocks of memory to potentially reduce the system-level memory
1222 requirements of a program. However, it cannot guarantee to reduce
1223 memory. Under some allocation patterns, some large free blocks of
1224 memory will be locked between two used chunks, so they cannot be
1225 given back to the system.
1226
1227 The `pad' argument to malloc_trim represents the amount of free
1228 trailing space to leave untrimmed. If this argument is zero, only
1229 the minimum amount of memory to maintain internal data structures
1230 will be left. Non-zero arguments can be supplied to maintain enough
1231 trailing space to service future expected allocations without having
1232 to re-obtain memory from the system.
1233
1234 Malloc_trim returns 1 if it actually released any memory, else 0.
1235 */
1236 DLMALLOC_EXPORT int dlmalloc_trim(size_t);
1237
1238 /*
1239 malloc_stats();
1240 Prints on stderr the amount of space obtained from the system (both
1241 via sbrk and mmap), the maximum amount (which may be more than
1242 current if malloc_trim and/or munmap got called), and the current
1243 number of bytes allocated via malloc (or realloc, etc) but not yet
1244 freed. Note that this is the number of bytes allocated, not the
1245 number requested. It will be larger than the number requested
1246 because of alignment and bookkeeping overhead. Because it includes
1247 alignment wastage as being in use, this figure may be greater than
1248 zero even when no user-level chunks are allocated.
1249
1250 The reported current and maximum system memory can be inaccurate if
1251 a program makes other calls to system memory allocation functions
1252 (normally sbrk) outside of malloc.
1253
1254 malloc_stats prints only the most commonly interesting statistics.
1255 More information can be obtained by calling mallinfo.
1256 */
1257 DLMALLOC_EXPORT void dlmalloc_stats(void);
1258
1259 /*
1260 malloc_usable_size(void* p);
1261
1262 Returns the number of bytes you can actually use in
1263 an allocated chunk, which may be more than you requested (although
1264 often not) due to alignment and minimum size constraints.
1265 You can use this many bytes without worrying about
1266 overwriting other allocated objects. This is not a particularly great
1267 programming practice. malloc_usable_size can be more useful in
1268 debugging and assertions, for example:
1269
1270 p = malloc(n);
1271 assert(malloc_usable_size(p) >= 256);
1272 */
1273 size_t dlmalloc_usable_size(void*);
1274
1275 #endif /* ONLY_MSPACES */
1276
1277 #if MSPACES
1278
1279 /*
1280 mspace is an opaque type representing an independent
1281 region of space that supports mspace_malloc, etc.
1282 */
1283 typedef void* mspace;
1284
1285 /*
1286 create_mspace creates and returns a new independent space with the
1287 given initial capacity, or, if 0, the default granularity size. It
1288 returns null if there is no system memory available to create the
1289 space. If argument locked is non-zero, the space uses a separate
1290 lock to control access. The capacity of the space will grow
1291 dynamically as needed to service mspace_malloc requests. You can
1292 control the sizes of incremental increases of this space by
1293 compiling with a different DEFAULT_GRANULARITY or dynamically
1294 setting with mallopt(M_GRANULARITY, value).
1295 */
1296 DLMALLOC_EXPORT mspace create_mspace(size_t capacity, int locked);
1297
1298 /*
1299 destroy_mspace destroys the given space, and attempts to return all
1300 of its memory back to the system, returning the total number of
1301 bytes freed. After destruction, the results of access to all memory
1302 used by the space become undefined.
1303 */
1304 DLMALLOC_EXPORT size_t destroy_mspace(mspace msp);
1305
1306 /*
1307 create_mspace_with_base uses the memory supplied as the initial base
1308 of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this
1309 space is used for bookkeeping, so the capacity must be at least this
1310 large. (Otherwise 0 is returned.) When this initial space is
1311 exhausted, additional memory will be obtained from the system.
1312 Destroying this space will deallocate all additionally allocated
1313 space (if possible) but not the initial base.
1314 */
1315 DLMALLOC_EXPORT mspace create_mspace_with_base(void* base, size_t capacity, int locked);
1316
1317 /*
1318 mspace_track_large_chunks controls whether requests for large chunks
1319 are allocated in their own untracked mmapped regions, separate from
1320 others in this mspace. By default large chunks are not tracked,
1321 which reduces fragmentation. However, such chunks are not
1322 necessarily released to the system upon destroy_mspace. Enabling
1323 tracking by setting to true may increase fragmentation, but avoids
1324 leakage when relying on destroy_mspace to release all memory
1325 allocated using this space. The function returns the previous
1326 setting.
1327 */
1328 DLMALLOC_EXPORT int mspace_track_large_chunks(mspace msp, int enable);
1329
1330
1331 /*
1332 mspace_malloc behaves as malloc, but operates within
1333 the given space.
1334 */
1335 DLMALLOC_EXPORT void* mspace_malloc(mspace msp, size_t bytes);
1336
1337 /*
1338 mspace_free behaves as free, but operates within
1339 the given space.
1340
1341 If compiled with FOOTERS==1, mspace_free is not actually needed.
1342 free may be called instead of mspace_free because freed chunks from
1343 any space are handled by their originating spaces.
1344 */
1345 DLMALLOC_EXPORT void mspace_free(mspace msp, void* mem);
1346
1347 /*
1348 mspace_realloc behaves as realloc, but operates within
1349 the given space.
1350
1351 If compiled with FOOTERS==1, mspace_realloc is not actually
1352 needed. realloc may be called instead of mspace_realloc because
1353 realloced chunks from any space are handled by their originating
1354 spaces.
1355 */
1356 DLMALLOC_EXPORT void* mspace_realloc(mspace msp, void* mem, size_t newsize);
1357
1358 /*
1359 mspace_calloc behaves as calloc, but operates within
1360 the given space.
1361 */
1362 DLMALLOC_EXPORT void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size);
1363
1364 /*
1365 mspace_memalign behaves as memalign, but operates within
1366 the given space.
1367 */
1368 DLMALLOC_EXPORT void* mspace_memalign(mspace msp, size_t alignment, size_t bytes);
1369
1370 /*
1371 mspace_independent_calloc behaves as independent_calloc, but
1372 operates within the given space.
1373 */
1374 DLMALLOC_EXPORT void** mspace_independent_calloc(mspace msp, size_t n_elements,
1375 size_t elem_size, void* chunks[]);
1376
1377 /*
1378 mspace_independent_comalloc behaves as independent_comalloc, but
1379 operates within the given space.
1380 */
1381 DLMALLOC_EXPORT void** mspace_independent_comalloc(mspace msp, size_t n_elements,
1382 size_t sizes[], void* chunks[]);
1383
1384 /*
1385 mspace_footprint() returns the number of bytes obtained from the
1386 system for this space.
1387 */
1388 DLMALLOC_EXPORT size_t mspace_footprint(mspace msp);
1389
1390 /*
1391 mspace_max_footprint() returns the peak number of bytes obtained from the
1392 system for this space.
1393 */
1394 DLMALLOC_EXPORT size_t mspace_max_footprint(mspace msp);
1395
1396
1397 #if !NO_MALLINFO
1398 /*
1399 mspace_mallinfo behaves as mallinfo, but reports properties of
1400 the given space.
1401 */
1402 DLMALLOC_EXPORT struct mallinfo mspace_mallinfo(mspace msp);
1403 #endif /* NO_MALLINFO */
1404
1405 /*
1406 malloc_usable_size(void* p) behaves the same as malloc_usable_size;
1407 */
1408 DLMALLOC_EXPORT size_t mspace_usable_size(const void* mem);
1409
1410 /*
1411 mspace_malloc_stats behaves as malloc_stats, but reports
1412 properties of the given space.
1413 */
1414 DLMALLOC_EXPORT void mspace_malloc_stats(mspace msp);
1415
1416 /*
1417 mspace_trim behaves as malloc_trim, but
1418 operates within the given space.
1419 */
1420 DLMALLOC_EXPORT int mspace_trim(mspace msp, size_t pad);
1421
1422 /*
1423 An alias for mallopt.
1424 */
1425 DLMALLOC_EXPORT int mspace_mallopt(int, int);
1426
1427 #endif /* MSPACES */
1428
1429 #ifdef __cplusplus
1430 } /* end of extern "C" */
1431 #endif /* __cplusplus */
1432
1433 /*
1434 ========================================================================
1435 To make a fully customizable malloc.h header file, cut everything
1436 above this line, put into file malloc.h, edit to suit, and #include it
1437 on the next line, as well as in programs that use this malloc.
1438 ========================================================================
1439 */
1440
1441 /* #include "malloc.h" */
1442
1443 /*------------------------------ internal #includes ---------------------- */
1444
1445 #ifdef _MSC_VER
1446 #pragma warning( disable : 4146 ) /* no "unsigned" warnings */
1447 #endif /* _MSC_VER */
1448 #if !NO_MALLOC_STATS
1449 #include <stdio.h> /* for printing in malloc_stats */
1450 #endif /* NO_MALLOC_STATS */
1451 #ifndef LACKS_ERRNO_H
1452 #include <errno.h> /* for MALLOC_FAILURE_ACTION */
1453 #endif /* LACKS_ERRNO_H */
1454 #ifdef DEBUG
1455 #if ABORT_ON_ASSERT_FAILURE
1456 #undef assert
1457 #define assert(x) if(!(x)) ABORT
1458 #else /* ABORT_ON_ASSERT_FAILURE */
1459 #include <assert.h>
1460 #endif /* ABORT_ON_ASSERT_FAILURE */
1461 #else /* DEBUG */
1462 #ifndef assert
1463 #define assert(x)
1464 #endif
1465 #define DEBUG 0
1466 #endif /* DEBUG */
1467 #if !defined(WIN32) && !defined(LACKS_TIME_H)
1468 #include <time.h> /* for magic initialization */
1469 #endif /* WIN32 */
1470 #ifndef LACKS_STDLIB_H
1471 #include <stdlib.h> /* for abort() */
1472 #endif /* LACKS_STDLIB_H */
1473 #ifndef LACKS_STRING_H
1474 #include <string.h> /* for memset etc */
1475 #endif /* LACKS_STRING_H */
1476 #if USE_BUILTIN_FFS
1477 #ifndef LACKS_STRINGS_H
1478 #include <strings.h> /* for ffs */
1479 #endif /* LACKS_STRINGS_H */
1480 #endif /* USE_BUILTIN_FFS */
1481 #if HAVE_MMAP
1482 #ifndef LACKS_SYS_MMAN_H
1483 /* On some versions of linux, mremap decl in mman.h needs __USE_GNU set */
1484 #if (defined(linux) && !defined(__USE_GNU))
1485 #define __USE_GNU 1
1486 #include <sys/mman.h> /* for mmap */
1487 #undef __USE_GNU
1488 #else
1489 #include <sys/mman.h> /* for mmap */
1490 #endif /* linux */
1491 #endif /* LACKS_SYS_MMAN_H */
1492 #ifndef LACKS_FCNTL_H
1493 #include <fcntl.h>
1494 #endif /* LACKS_FCNTL_H */
1495 #endif /* HAVE_MMAP */
1496 #ifndef LACKS_UNISTD_H
1497 #include <unistd.h> /* for sbrk, sysconf */
1498 #else /* LACKS_UNISTD_H */
1499 #if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
1500 extern void* sbrk(ptrdiff_t);
1501 #endif /* FreeBSD etc */
1502 #endif /* LACKS_UNISTD_H */
1503
1504 /* Declarations for locking */
1505 #if USE_LOCKS
1506 #ifndef WIN32
1507 #if defined (__SVR4) && defined (__sun) /* solaris */
1508 #include <thread.h>
1509 #elif !defined(LACKS_SCHED_H)
1510 #include <sched.h>
1511 #endif /* solaris or LACKS_SCHED_H */
1512 #if (defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0) || !USE_SPIN_LOCKS
1513 #include <pthread.h>
1514 #endif /* USE_RECURSIVE_LOCKS ... */
1515 #elif defined(_MSC_VER)
1516 #ifndef _M_AMD64
1517 /* These are already defined on AMD64 builds */
1518 #ifdef __cplusplus
1519 extern "C" {
1520 #endif /* __cplusplus */
1521 LONG __cdecl _InterlockedCompareExchange(LONG volatile *Dest, LONG Exchange, LONG Comp);
1522 LONG __cdecl _InterlockedExchange(LONG volatile *Target, LONG Value);
1523 #ifdef __cplusplus
1524 }
1525 #endif /* __cplusplus */
1526 #endif /* _M_AMD64 */
1527 #pragma intrinsic (_InterlockedCompareExchange)
1528 #pragma intrinsic (_InterlockedExchange)
1529 #define interlockedcompareexchange _InterlockedCompareExchange
1530 #define interlockedexchange _InterlockedExchange
1531 #elif defined(WIN32) && defined(__GNUC__)
1532 #define interlockedcompareexchange(a, b, c) __sync_val_compare_and_swap(a, c, b)
1533 #define interlockedexchange __sync_lock_test_and_set
1534 #endif /* Win32 */
1535 #else /* USE_LOCKS */
1536 #endif /* USE_LOCKS */
1537
1538 #ifndef LOCK_AT_FORK
1539 #define LOCK_AT_FORK 0
1540 #endif
1541
1542 /* Declarations for bit scanning on win32 */
1543 #if defined(_MSC_VER) && _MSC_VER>=1300
1544 #ifndef BitScanForward /* Try to avoid pulling in WinNT.h */
1545 #ifdef __cplusplus
1546 extern "C" {
1547 #endif /* __cplusplus */
1548 unsigned char _BitScanForward(unsigned long *index, unsigned long mask);
1549 unsigned char _BitScanReverse(unsigned long *index, unsigned long mask);
1550 #ifdef __cplusplus
1551 }
1552 #endif /* __cplusplus */
1553
1554 #define BitScanForward _BitScanForward
1555 #define BitScanReverse _BitScanReverse
1556 #pragma intrinsic(_BitScanForward)
1557 #pragma intrinsic(_BitScanReverse)
1558 #endif /* BitScanForward */
1559 #endif /* defined(_MSC_VER) && _MSC_VER>=1300 */
1560
1561 #ifndef WIN32
1562 #ifndef malloc_getpagesize
1563 # ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */
1564 # ifndef _SC_PAGE_SIZE
1565 # define _SC_PAGE_SIZE _SC_PAGESIZE
1566 # endif
1567 # endif
1568 # ifdef _SC_PAGE_SIZE
1569 # define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
1570 # else
1571 # if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
1572 extern size_t getpagesize();
1573 # define malloc_getpagesize getpagesize()
1574 # else
1575 # ifdef WIN32 /* use supplied emulation of getpagesize */
1576 # define malloc_getpagesize getpagesize()
1577 # else
1578 # ifndef LACKS_SYS_PARAM_H
1579 # include <sys/param.h>
1580 # endif
1581 # ifdef EXEC_PAGESIZE
1582 # define malloc_getpagesize EXEC_PAGESIZE
1583 # else
1584 # ifdef NBPG
1585 # ifndef CLSIZE
1586 # define malloc_getpagesize NBPG
1587 # else
1588 # define malloc_getpagesize (NBPG * CLSIZE)
1589 # endif
1590 # else
1591 # ifdef NBPC
1592 # define malloc_getpagesize NBPC
1593 # else
1594 # ifdef PAGESIZE
1595 # define malloc_getpagesize PAGESIZE
1596 # else /* just guess */
1597 # define malloc_getpagesize ((size_t)4096U)
1598 # endif
1599 # endif
1600 # endif
1601 # endif
1602 # endif
1603 # endif
1604 # endif
1605 #endif
1606 #endif
1607
1608 /* ------------------- size_t and alignment properties -------------------- */
1609
1610 /* The byte and bit size of a size_t */
1611 #define SIZE_T_SIZE (sizeof(size_t))
1612 #define SIZE_T_BITSIZE (sizeof(size_t) << 3)
1613
1614 /* Some constants coerced to size_t */
1615 /* Annoying but necessary to avoid errors on some platforms */
1616 #define SIZE_T_ZERO ((size_t)0)
1617 #define SIZE_T_ONE ((size_t)1)
1618 #define SIZE_T_TWO ((size_t)2)
1619 #define SIZE_T_FOUR ((size_t)4)
1620 #define TWO_SIZE_T_SIZES (SIZE_T_SIZE<<1)
1621 #define FOUR_SIZE_T_SIZES (SIZE_T_SIZE<<2)
1622 #define SIX_SIZE_T_SIZES (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
1623 #define HALF_MAX_SIZE_T (MAX_SIZE_T / 2U)
1624
1625 /* The bit mask value corresponding to MALLOC_ALIGNMENT */
1626 #define CHUNK_ALIGN_MASK (MALLOC_ALIGNMENT - SIZE_T_ONE)
1627
1628 /* True if address a has acceptable alignment */
1629 #define is_aligned(A) (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)
1630
1631 /* the number of bytes to offset an address to align it */
1632 #define align_offset(A)\
1633 ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
1634 ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
1635
1636 /* -------------------------- MMAP preliminaries ------------------------- */
1637
1638 /*
1639 If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and
1640 checks to fail so compiler optimizer can delete code rather than
1641 using so many "#if"s.
1642 */
1643
1644
1645 /* MORECORE and MMAP must return MFAIL on failure */
1646 #define MFAIL ((void*)(MAX_SIZE_T))
1647 #define CMFAIL ((char*)(MFAIL)) /* defined for convenience */
1648
1649 #if HAVE_MMAP
1650
1651 #ifndef WIN32
1652 #define MUNMAP_DEFAULT(a, s) munmap((a), (s))
1653 #define MMAP_PROT (PROT_READ|PROT_WRITE)
1654 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
1655 #define MAP_ANONYMOUS MAP_ANON
1656 #endif /* MAP_ANON */
1657 #ifdef MAP_ANONYMOUS
1658 #define MMAP_FLAGS (MAP_PRIVATE|MAP_ANONYMOUS)
1659 #define MMAP_DEFAULT(s) mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
1660 #else /* MAP_ANONYMOUS */
1661 /*
1662 Nearly all versions of mmap support MAP_ANONYMOUS, so the following
1663 is unlikely to be needed, but is supplied just in case.
1664 */
1665 #define MMAP_FLAGS (MAP_PRIVATE)
1666 static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
1667 #define MMAP_DEFAULT(s) ((dev_zero_fd < 0) ? \
1668 (dev_zero_fd = open("/dev/zero", O_RDWR), \
1669 mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
1670 mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
1671 #endif /* MAP_ANONYMOUS */
1672
1673 #define DIRECT_MMAP_DEFAULT(s) MMAP_DEFAULT(s)
1674
1675 #else /* WIN32 */
1676
1677 /* Win32 MMAP via VirtualAlloc */
win32mmap(size_t size)1678 static FORCEINLINE void* win32mmap(size_t size) {
1679 void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE);
1680 return (ptr != 0)? ptr: MFAIL;
1681 }
1682
1683 /* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
win32direct_mmap(size_t size)1684 static FORCEINLINE void* win32direct_mmap(size_t size) {
1685 void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
1686 PAGE_READWRITE);
1687 return (ptr != 0)? ptr: MFAIL;
1688 }
1689
1690 /* This function supports releasing coalesed segments */
win32munmap(void * ptr,size_t size)1691 static FORCEINLINE int win32munmap(void* ptr, size_t size) {
1692 MEMORY_BASIC_INFORMATION minfo;
1693 char* cptr = (char*)ptr;
1694 while (size) {
1695 if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
1696 return -1;
1697 if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
1698 minfo.State != MEM_COMMIT || minfo.RegionSize > size)
1699 return -1;
1700 if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
1701 return -1;
1702 cptr += minfo.RegionSize;
1703 size -= minfo.RegionSize;
1704 }
1705 return 0;
1706 }
1707
1708 #define MMAP_DEFAULT(s) win32mmap(s)
1709 #define MUNMAP_DEFAULT(a, s) win32munmap((a), (s))
1710 #define DIRECT_MMAP_DEFAULT(s) win32direct_mmap(s)
1711 #endif /* WIN32 */
1712 #endif /* HAVE_MMAP */
1713
1714 #if HAVE_MREMAP
1715 #ifndef WIN32
1716 #define MREMAP_DEFAULT(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
1717 #endif /* WIN32 */
1718 #endif /* HAVE_MREMAP */
1719
1720 /**
1721 * Define CALL_MORECORE
1722 */
1723 #if HAVE_MORECORE
1724 #ifdef MORECORE
1725 #define CALL_MORECORE(S) MORECORE(S)
1726 #else /* MORECORE */
1727 #define CALL_MORECORE(S) MORECORE_DEFAULT(S)
1728 #endif /* MORECORE */
1729 #else /* HAVE_MORECORE */
1730 #define CALL_MORECORE(S) MFAIL
1731 #endif /* HAVE_MORECORE */
1732
1733 /**
1734 * Define CALL_MMAP/CALL_MUNMAP/CALL_DIRECT_MMAP
1735 */
1736 #if HAVE_MMAP
1737 #define USE_MMAP_BIT (SIZE_T_ONE)
1738
1739 #ifdef MMAP
1740 #define CALL_MMAP(s) MMAP(s)
1741 #else /* MMAP */
1742 #define CALL_MMAP(s) MMAP_DEFAULT(s)
1743 #endif /* MMAP */
1744 #ifdef MUNMAP
1745 #define CALL_MUNMAP(a, s) MUNMAP((a), (s))
1746 #else /* MUNMAP */
1747 #define CALL_MUNMAP(a, s) MUNMAP_DEFAULT((a), (s))
1748 #endif /* MUNMAP */
1749 #ifdef DIRECT_MMAP
1750 #define CALL_DIRECT_MMAP(s) DIRECT_MMAP(s)
1751 #else /* DIRECT_MMAP */
1752 #define CALL_DIRECT_MMAP(s) DIRECT_MMAP_DEFAULT(s)
1753 #endif /* DIRECT_MMAP */
1754 #else /* HAVE_MMAP */
1755 #define USE_MMAP_BIT (SIZE_T_ZERO)
1756
1757 #define MMAP(s) MFAIL
1758 #define MUNMAP(a, s) (-1)
1759 #define DIRECT_MMAP(s) MFAIL
1760 #define CALL_DIRECT_MMAP(s) DIRECT_MMAP(s)
1761 #define CALL_MMAP(s) MMAP(s)
1762 #define CALL_MUNMAP(a, s) MUNMAP((a), (s))
1763 #endif /* HAVE_MMAP */
1764
1765 /**
1766 * Define CALL_MREMAP
1767 */
1768 #if HAVE_MMAP && HAVE_MREMAP
1769 #ifdef MREMAP
1770 #define CALL_MREMAP(addr, osz, nsz, mv) MREMAP((addr), (osz), (nsz), (mv))
1771 #else /* MREMAP */
1772 #define CALL_MREMAP(addr, osz, nsz, mv) MREMAP_DEFAULT((addr), (osz), (nsz), (mv))
1773 #endif /* MREMAP */
1774 #else /* HAVE_MMAP && HAVE_MREMAP */
1775 #define CALL_MREMAP(addr, osz, nsz, mv) MFAIL
1776 #endif /* HAVE_MMAP && HAVE_MREMAP */
1777
1778 /* mstate bit set if continguous morecore disabled or failed */
1779 #define USE_NONCONTIGUOUS_BIT (4U)
1780
1781 /* segment bit set in create_mspace_with_base */
1782 #define EXTERN_BIT (8U)
1783
1784
1785 /* --------------------------- Lock preliminaries ------------------------ */
1786
1787 /*
1788 When locks are defined, there is one global lock, plus
1789 one per-mspace lock.
1790
1791 The global lock_ensures that mparams.magic and other unique
1792 mparams values are initialized only once. It also protects
1793 sequences of calls to MORECORE. In many cases sys_alloc requires
1794 two calls, that should not be interleaved with calls by other
1795 threads. This does not protect against direct calls to MORECORE
1796 by other threads not using this lock, so there is still code to
1797 cope the best we can on interference.
1798
1799 Per-mspace locks surround calls to malloc, free, etc.
1800 By default, locks are simple non-reentrant mutexes.
1801
1802 Because lock-protected regions generally have bounded times, it is
1803 OK to use the supplied simple spinlocks. Spinlocks are likely to
1804 improve performance for lightly contended applications, but worsen
1805 performance under heavy contention.
1806
1807 If USE_LOCKS is > 1, the definitions of lock routines here are
1808 bypassed, in which case you will need to define the type MLOCK_T,
1809 and at least INITIAL_LOCK, DESTROY_LOCK, ACQUIRE_LOCK, RELEASE_LOCK
1810 and TRY_LOCK. You must also declare a
1811 static MLOCK_T malloc_global_mutex = { initialization values };.
1812
1813 */
1814
1815 #if !USE_LOCKS
1816 #define USE_LOCK_BIT (0U)
1817 #define INITIAL_LOCK(l) (0)
1818 #define DESTROY_LOCK(l) (0)
1819 #define ACQUIRE_MALLOC_GLOBAL_LOCK()
1820 #define RELEASE_MALLOC_GLOBAL_LOCK()
1821
1822 #else
1823 #if USE_LOCKS > 1
1824 /* ----------------------- User-defined locks ------------------------ */
1825 /* Define your own lock implementation here */
1826 /* #define INITIAL_LOCK(lk) ... */
1827 /* #define DESTROY_LOCK(lk) ... */
1828 /* #define ACQUIRE_LOCK(lk) ... */
1829 /* #define RELEASE_LOCK(lk) ... */
1830 /* #define TRY_LOCK(lk) ... */
1831 /* static MLOCK_T malloc_global_mutex = ... */
1832
1833 #elif USE_SPIN_LOCKS
1834
1835 /* First, define CAS_LOCK and CLEAR_LOCK on ints */
1836 /* Note CAS_LOCK defined to return 0 on success */
1837
1838 #if defined(__GNUC__)&& (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 1))
1839 #define CAS_LOCK(sl) __sync_lock_test_and_set(sl, 1)
1840 #define CLEAR_LOCK(sl) __sync_lock_release(sl)
1841
1842 #elif (defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__)))
1843 /* Custom spin locks for older gcc on x86 */
x86_cas_lock(int * sl)1844 static FORCEINLINE int x86_cas_lock(int *sl) {
1845 int ret;
1846 int val = 1;
1847 int cmp = 0;
1848 __asm__ __volatile__ ("lock; cmpxchgl %1, %2"
1849 : "=a" (ret)
1850 : "r" (val), "m" (*(sl)), "0"(cmp)
1851 : "memory", "cc");
1852 return ret;
1853 }
1854
x86_clear_lock(int * sl)1855 static FORCEINLINE void x86_clear_lock(int* sl) {
1856 assert(*sl != 0);
1857 int prev = 0;
1858 int ret;
1859 __asm__ __volatile__ ("lock; xchgl %0, %1"
1860 : "=r" (ret)
1861 : "m" (*(sl)), "0"(prev)
1862 : "memory");
1863 }
1864
1865 #define CAS_LOCK(sl) x86_cas_lock(sl)
1866 #define CLEAR_LOCK(sl) x86_clear_lock(sl)
1867
1868 #else /* Win32 MSC */
1869 #define CAS_LOCK(sl) interlockedexchange(sl, (LONG)1)
1870 #define CLEAR_LOCK(sl) interlockedexchange (sl, (LONG)0)
1871
1872 #endif /* ... gcc spins locks ... */
1873
1874 /* How to yield for a spin lock */
1875 #define SPINS_PER_YIELD 63
1876 #if defined(_MSC_VER)
1877 #define SLEEP_EX_DURATION 50 /* delay for yield/sleep */
1878 #define SPIN_LOCK_YIELD SleepEx(SLEEP_EX_DURATION, FALSE)
1879 #elif defined (__SVR4) && defined (__sun) /* solaris */
1880 #define SPIN_LOCK_YIELD thr_yield();
1881 #elif !defined(LACKS_SCHED_H)
1882 #define SPIN_LOCK_YIELD sched_yield();
1883 #else
1884 #define SPIN_LOCK_YIELD
1885 #endif /* ... yield ... */
1886
1887 #if !defined(USE_RECURSIVE_LOCKS) || USE_RECURSIVE_LOCKS == 0
1888 /* Plain spin locks use single word (embedded in malloc_states) */
spin_acquire_lock(int * sl)1889 static int spin_acquire_lock(int *sl) {
1890 int spins = 0;
1891 while (*(volatile int *)sl != 0 || CAS_LOCK(sl)) {
1892 if ((++spins & SPINS_PER_YIELD) == 0) {
1893 SPIN_LOCK_YIELD;
1894 }
1895 }
1896 return 0;
1897 }
1898
1899 #define MLOCK_T int
1900 #define TRY_LOCK(sl) !CAS_LOCK(sl)
1901 #define RELEASE_LOCK(sl) CLEAR_LOCK(sl)
1902 #define ACQUIRE_LOCK(sl) (CAS_LOCK(sl)? spin_acquire_lock(sl) : 0)
1903 #define INITIAL_LOCK(sl) (*sl = 0)
1904 #define DESTROY_LOCK(sl) (0)
1905 static MLOCK_T malloc_global_mutex = 0;
1906
1907 #else /* USE_RECURSIVE_LOCKS */
1908 /* types for lock owners */
1909 #ifdef WIN32
1910 #define THREAD_ID_T DWORD
1911 #define CURRENT_THREAD GetCurrentThreadId()
1912 #define EQ_OWNER(X,Y) ((X) == (Y))
1913 #else
1914 /*
1915 Note: the following assume that pthread_t is a type that can be
1916 initialized to (casted) zero. If this is not the case, you will need to
1917 somehow redefine these or not use spin locks.
1918 */
1919 #define THREAD_ID_T pthread_t
1920 #define CURRENT_THREAD pthread_self()
1921 #define EQ_OWNER(X,Y) pthread_equal(X, Y)
1922 #endif
1923
1924 struct malloc_recursive_lock {
1925 int sl;
1926 unsigned int c;
1927 THREAD_ID_T threadid;
1928 };
1929
1930 #define MLOCK_T struct malloc_recursive_lock
1931 static MLOCK_T malloc_global_mutex = { 0, 0, (THREAD_ID_T)0};
1932
recursive_release_lock(MLOCK_T * lk)1933 static FORCEINLINE void recursive_release_lock(MLOCK_T *lk) {
1934 assert(lk->sl != 0);
1935 if (--lk->c == 0) {
1936 CLEAR_LOCK(&lk->sl);
1937 }
1938 }
1939
recursive_acquire_lock(MLOCK_T * lk)1940 static FORCEINLINE int recursive_acquire_lock(MLOCK_T *lk) {
1941 THREAD_ID_T mythreadid = CURRENT_THREAD;
1942 int spins = 0;
1943 for (;;) {
1944 if (*((volatile int *)(&lk->sl)) == 0) {
1945 if (!CAS_LOCK(&lk->sl)) {
1946 lk->threadid = mythreadid;
1947 lk->c = 1;
1948 return 0;
1949 }
1950 }
1951 else if (EQ_OWNER(lk->threadid, mythreadid)) {
1952 ++lk->c;
1953 return 0;
1954 }
1955 if ((++spins & SPINS_PER_YIELD) == 0) {
1956 SPIN_LOCK_YIELD;
1957 }
1958 }
1959 }
1960
recursive_try_lock(MLOCK_T * lk)1961 static FORCEINLINE int recursive_try_lock(MLOCK_T *lk) {
1962 THREAD_ID_T mythreadid = CURRENT_THREAD;
1963 if (*((volatile int *)(&lk->sl)) == 0) {
1964 if (!CAS_LOCK(&lk->sl)) {
1965 lk->threadid = mythreadid;
1966 lk->c = 1;
1967 return 1;
1968 }
1969 }
1970 else if (EQ_OWNER(lk->threadid, mythreadid)) {
1971 ++lk->c;
1972 return 1;
1973 }
1974 return 0;
1975 }
1976
1977 #define RELEASE_LOCK(lk) recursive_release_lock(lk)
1978 #define TRY_LOCK(lk) recursive_try_lock(lk)
1979 #define ACQUIRE_LOCK(lk) recursive_acquire_lock(lk)
1980 #define INITIAL_LOCK(lk) ((lk)->threadid = (THREAD_ID_T)0, (lk)->sl = 0, (lk)->c = 0)
1981 #define DESTROY_LOCK(lk) (0)
1982 #endif /* USE_RECURSIVE_LOCKS */
1983
1984 #elif defined(WIN32) /* Win32 critical sections */
1985 #define MLOCK_T CRITICAL_SECTION
1986 #define ACQUIRE_LOCK(lk) (EnterCriticalSection(lk), 0)
1987 #define RELEASE_LOCK(lk) LeaveCriticalSection(lk)
1988 #define TRY_LOCK(lk) TryEnterCriticalSection(lk)
1989 #define INITIAL_LOCK(lk) (!InitializeCriticalSectionAndSpinCount((lk), 0x80000000|4000))
1990 #define DESTROY_LOCK(lk) (DeleteCriticalSection(lk), 0)
1991 #define NEED_GLOBAL_LOCK_INIT
1992
1993 static MLOCK_T malloc_global_mutex;
1994 static volatile LONG malloc_global_mutex_status;
1995
1996 /* Use spin loop to initialize global lock */
init_malloc_global_mutex()1997 static void init_malloc_global_mutex() {
1998 for (;;) {
1999 long stat = malloc_global_mutex_status;
2000 if (stat > 0)
2001 return;
2002 /* transition to < 0 while initializing, then to > 0) */
2003 if (stat == 0 &&
2004 interlockedcompareexchange(&malloc_global_mutex_status, (LONG)-1, (LONG)0) == 0) {
2005 InitializeCriticalSection(&malloc_global_mutex);
2006 interlockedexchange(&malloc_global_mutex_status, (LONG)1);
2007 return;
2008 }
2009 SleepEx(0, FALSE);
2010 }
2011 }
2012
2013 #else /* pthreads-based locks */
2014 #define MLOCK_T pthread_mutex_t
2015 #define ACQUIRE_LOCK(lk) pthread_mutex_lock(lk)
2016 #define RELEASE_LOCK(lk) pthread_mutex_unlock(lk)
2017 #define TRY_LOCK(lk) (!pthread_mutex_trylock(lk))
2018 #define INITIAL_LOCK(lk) pthread_init_lock(lk)
2019 #define DESTROY_LOCK(lk) pthread_mutex_destroy(lk)
2020
2021 #if defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0 && defined(linux) && !defined(PTHREAD_MUTEX_RECURSIVE)
2022 /* Cope with old-style linux recursive lock initialization by adding */
2023 /* skipped internal declaration from pthread.h */
2024 extern int pthread_mutexattr_setkind_np __P ((pthread_mutexattr_t *__attr,
2025 int __kind));
2026 #define PTHREAD_MUTEX_RECURSIVE PTHREAD_MUTEX_RECURSIVE_NP
2027 #define pthread_mutexattr_settype(x,y) pthread_mutexattr_setkind_np(x,y)
2028 #endif /* USE_RECURSIVE_LOCKS ... */
2029
2030 static MLOCK_T malloc_global_mutex = PTHREAD_MUTEX_INITIALIZER;
2031
pthread_init_lock(MLOCK_T * lk)2032 static int pthread_init_lock (MLOCK_T *lk) {
2033 pthread_mutexattr_t attr;
2034 if (pthread_mutexattr_init(&attr)) return 1;
2035 #if defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0
2036 if (pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE)) return 1;
2037 #endif
2038 if (pthread_mutex_init(lk, &attr)) return 1;
2039 if (pthread_mutexattr_destroy(&attr)) return 1;
2040 return 0;
2041 }
2042
2043 #endif /* ... lock types ... */
2044
2045 /* Common code for all lock types */
2046 #define USE_LOCK_BIT (2U)
2047
2048 #ifndef ACQUIRE_MALLOC_GLOBAL_LOCK
2049 #define ACQUIRE_MALLOC_GLOBAL_LOCK() ACQUIRE_LOCK(&malloc_global_mutex);
2050 #endif
2051
2052 #ifndef RELEASE_MALLOC_GLOBAL_LOCK
2053 #define RELEASE_MALLOC_GLOBAL_LOCK() RELEASE_LOCK(&malloc_global_mutex);
2054 #endif
2055
2056 #endif /* USE_LOCKS */
2057
2058 /* ----------------------- Chunk representations ------------------------ */
2059
2060 /*
2061 (The following includes lightly edited explanations by Colin Plumb.)
2062
2063 The malloc_chunk declaration below is misleading (but accurate and
2064 necessary). It declares a "view" into memory allowing access to
2065 necessary fields at known offsets from a given base.
2066
2067 Chunks of memory are maintained using a `boundary tag' method as
2068 originally described by Knuth. (See the paper by Paul Wilson
2069 ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
2070 techniques.) Sizes of free chunks are stored both in the front of
2071 each chunk and at the end. This makes consolidating fragmented
2072 chunks into bigger chunks fast. The head fields also hold bits
2073 representing whether chunks are free or in use.
2074
2075 Here are some pictures to make it clearer. They are "exploded" to
2076 show that the state of a chunk can be thought of as extending from
2077 the high 31 bits of the head field of its header through the
2078 prev_foot and PINUSE_BIT bit of the following chunk header.
2079
2080 A chunk that's in use looks like:
2081
2082 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2083 | Size of previous chunk (if P = 0) |
2084 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2085 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
2086 | Size of this chunk 1| +-+
2087 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2088 | |
2089 +- -+
2090 | |
2091 +- -+
2092 | :
2093 +- size - sizeof(size_t) available payload bytes -+
2094 : |
2095 chunk-> +- -+
2096 | |
2097 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2098 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
2099 | Size of next chunk (may or may not be in use) | +-+
2100 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2101
2102 And if it's free, it looks like this:
2103
2104 chunk-> +- -+
2105 | User payload (must be in use, or we would have merged!) |
2106 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2107 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
2108 | Size of this chunk 0| +-+
2109 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2110 | Next pointer |
2111 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2112 | Prev pointer |
2113 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2114 | :
2115 +- size - sizeof(struct chunk) unused bytes -+
2116 : |
2117 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2118 | Size of this chunk |
2119 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2120 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
2121 | Size of next chunk (must be in use, or we would have merged)| +-+
2122 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2123 | :
2124 +- User payload -+
2125 : |
2126 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2127 |0|
2128 +-+
2129 Note that since we always merge adjacent free chunks, the chunks
2130 adjacent to a free chunk must be in use.
2131
2132 Given a pointer to a chunk (which can be derived trivially from the
2133 payload pointer) we can, in O(1) time, find out whether the adjacent
2134 chunks are free, and if so, unlink them from the lists that they
2135 are on and merge them with the current chunk.
2136
2137 Chunks always begin on even word boundaries, so the mem portion
2138 (which is returned to the user) is also on an even word boundary, and
2139 thus at least double-word aligned.
2140
2141 The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
2142 chunk size (which is always a multiple of two words), is an in-use
2143 bit for the *previous* chunk. If that bit is *clear*, then the
2144 word before the current chunk size contains the previous chunk
2145 size, and can be used to find the front of the previous chunk.
2146 The very first chunk allocated always has this bit set, preventing
2147 access to non-existent (or non-owned) memory. If pinuse is set for
2148 any given chunk, then you CANNOT determine the size of the
2149 previous chunk, and might even get a memory addressing fault when
2150 trying to do so.
2151
2152 The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
2153 the chunk size redundantly records whether the current chunk is
2154 inuse (unless the chunk is mmapped). This redundancy enables usage
2155 checks within free and realloc, and reduces indirection when freeing
2156 and consolidating chunks.
2157
2158 Each freshly allocated chunk must have both cinuse and pinuse set.
2159 That is, each allocated chunk borders either a previously allocated
2160 and still in-use chunk, or the base of its memory arena. This is
2161 ensured by making all allocations from the `lowest' part of any
2162 found chunk. Further, no free chunk physically borders another one,
2163 so each free chunk is known to be preceded and followed by either
2164 inuse chunks or the ends of memory.
2165
2166 Note that the `foot' of the current chunk is actually represented
2167 as the prev_foot of the NEXT chunk. This makes it easier to
2168 deal with alignments etc but can be very confusing when trying
2169 to extend or adapt this code.
2170
2171 The exceptions to all this are
2172
2173 1. The special chunk `top' is the top-most available chunk (i.e.,
2174 the one bordering the end of available memory). It is treated
2175 specially. Top is never included in any bin, is used only if
2176 no other chunk is available, and is released back to the
2177 system if it is very large (see M_TRIM_THRESHOLD). In effect,
2178 the top chunk is treated as larger (and thus less well
2179 fitting) than any other available chunk. The top chunk
2180 doesn't update its trailing size field since there is no next
2181 contiguous chunk that would have to index off it. However,
2182 space is still allocated for it (TOP_FOOT_SIZE) to enable
2183 separation or merging when space is extended.
2184
2185 3. Chunks allocated via mmap, have both cinuse and pinuse bits
2186 cleared in their head fields. Because they are allocated
2187 one-by-one, each must carry its own prev_foot field, which is
2188 also used to hold the offset this chunk has within its mmapped
2189 region, which is needed to preserve alignment. Each mmapped
2190 chunk is trailed by the first two fields of a fake next-chunk
2191 for sake of usage checks.
2192
2193 */
2194
2195 struct malloc_chunk {
2196 size_t prev_foot; /* Size of previous chunk (if free). */
2197 size_t head; /* Size and inuse bits. */
2198 struct malloc_chunk* fd; /* double links -- used only if free. */
2199 struct malloc_chunk* bk;
2200 };
2201
2202 typedef struct malloc_chunk mchunk;
2203 typedef struct malloc_chunk* mchunkptr;
2204 typedef struct malloc_chunk* sbinptr; /* The type of bins of chunks */
2205 typedef unsigned int bindex_t; /* Described below */
2206 typedef unsigned int binmap_t; /* Described below */
2207 typedef unsigned int flag_t; /* The type of various bit flag sets */
2208
2209 /* ------------------- Chunks sizes and alignments ----------------------- */
2210
2211 #define MCHUNK_SIZE (sizeof(mchunk))
2212
2213 #if FOOTERS
2214 #define CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
2215 #else /* FOOTERS */
2216 #define CHUNK_OVERHEAD (SIZE_T_SIZE)
2217 #endif /* FOOTERS */
2218
2219 /* MMapped chunks need a second word of overhead ... */
2220 #define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
2221 /* ... and additional padding for fake next-chunk at foot */
2222 #define MMAP_FOOT_PAD (FOUR_SIZE_T_SIZES)
2223
2224 /* The smallest size we can malloc is an aligned minimal chunk */
2225 #define MIN_CHUNK_SIZE\
2226 ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
2227
2228 /* conversion from malloc headers to user pointers, and back */
2229 #define chunk2mem(p) ((void*)((char*)(p) + TWO_SIZE_T_SIZES))
2230 #define mem2chunk(mem) ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
2231 /* chunk associated with aligned address A */
2232 #define align_as_chunk(A) (mchunkptr)((A) + align_offset(chunk2mem(A)))
2233
2234 /* Bounds on request (not chunk) sizes. */
2235 #define MAX_REQUEST ((-MIN_CHUNK_SIZE) << 2)
2236 #define MIN_REQUEST (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)
2237
2238 /* pad request bytes into a usable size */
2239 #define pad_request(req) \
2240 (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
2241
2242 /* pad request, checking for minimum (but not maximum) */
2243 #define request2size(req) \
2244 (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
2245
2246
2247 /* ------------------ Operations on head and foot fields ----------------- */
2248
2249 /*
2250 The head field of a chunk is or'ed with PINUSE_BIT when previous
2251 adjacent chunk in use, and or'ed with CINUSE_BIT if this chunk is in
2252 use, unless mmapped, in which case both bits are cleared.
2253
2254 FLAG4_BIT is not used by this malloc, but might be useful in extensions.
2255 */
2256
2257 #define PINUSE_BIT (SIZE_T_ONE)
2258 #define CINUSE_BIT (SIZE_T_TWO)
2259 #define FLAG4_BIT (SIZE_T_FOUR)
2260 #define INUSE_BITS (PINUSE_BIT|CINUSE_BIT)
2261 #define FLAG_BITS (PINUSE_BIT|CINUSE_BIT|FLAG4_BIT)
2262
2263 /* Head value for fenceposts */
2264 #define FENCEPOST_HEAD (INUSE_BITS|SIZE_T_SIZE)
2265
2266 /* extraction of fields from head words */
2267 #define cinuse(p) ((p)->head & CINUSE_BIT)
2268 #define pinuse(p) ((p)->head & PINUSE_BIT)
2269 #define flag4inuse(p) ((p)->head & FLAG4_BIT)
2270 #define is_inuse(p) (((p)->head & INUSE_BITS) != PINUSE_BIT)
2271 #define is_mmapped(p) (((p)->head & INUSE_BITS) == 0)
2272
2273 #define chunksize(p) ((p)->head & ~(FLAG_BITS))
2274
2275 #define clear_pinuse(p) ((p)->head &= ~PINUSE_BIT)
2276 #define set_flag4(p) ((p)->head |= FLAG4_BIT)
2277 #define clear_flag4(p) ((p)->head &= ~FLAG4_BIT)
2278
2279 /* Treat space at ptr +/- offset as a chunk */
2280 #define chunk_plus_offset(p, s) ((mchunkptr)(((char*)(p)) + (s)))
2281 #define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))
2282
2283 /* Ptr to next or previous physical malloc_chunk. */
2284 #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~FLAG_BITS)))
2285 #define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))
2286
2287 /* extract next chunk's pinuse bit */
2288 #define next_pinuse(p) ((next_chunk(p)->head) & PINUSE_BIT)
2289
2290 /* Get/set size at footer */
2291 #define get_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot)
2292 #define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))
2293
2294 /* Set size, pinuse bit, and foot */
2295 #define set_size_and_pinuse_of_free_chunk(p, s)\
2296 ((p)->head = (s|PINUSE_BIT), set_foot(p, s))
2297
2298 /* Set size, pinuse bit, foot, and clear next pinuse */
2299 #define set_free_with_pinuse(p, s, n)\
2300 (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))
2301
2302 /* Get the internal overhead associated with chunk p */
2303 #define overhead_for(p)\
2304 (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)
2305
2306 /* Return true if malloced space is not necessarily cleared */
2307 #if MMAP_CLEARS
2308 #define calloc_must_clear(p) (!is_mmapped(p))
2309 #else /* MMAP_CLEARS */
2310 #define calloc_must_clear(p) (1)
2311 #endif /* MMAP_CLEARS */
2312
2313 /* ---------------------- Overlaid data structures ----------------------- */
2314
2315 /*
2316 When chunks are not in use, they are treated as nodes of either
2317 lists or trees.
2318
2319 "Small" chunks are stored in circular doubly-linked lists, and look
2320 like this:
2321
2322 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2323 | Size of previous chunk |
2324 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2325 `head:' | Size of chunk, in bytes |P|
2326 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2327 | Forward pointer to next chunk in list |
2328 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2329 | Back pointer to previous chunk in list |
2330 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2331 | Unused space (may be 0 bytes long) .
2332 . .
2333 . |
2334 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2335 `foot:' | Size of chunk, in bytes |
2336 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2337
2338 Larger chunks are kept in a form of bitwise digital trees (aka
2339 tries) keyed on chunksizes. Because malloc_tree_chunks are only for
2340 free chunks greater than 256 bytes, their size doesn't impose any
2341 constraints on user chunk sizes. Each node looks like:
2342
2343 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2344 | Size of previous chunk |
2345 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2346 `head:' | Size of chunk, in bytes |P|
2347 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2348 | Forward pointer to next chunk of same size |
2349 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2350 | Back pointer to previous chunk of same size |
2351 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2352 | Pointer to left child (child[0]) |
2353 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2354 | Pointer to right child (child[1]) |
2355 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2356 | Pointer to parent |
2357 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2358 | bin index of this chunk |
2359 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2360 | Unused space .
2361 . |
2362 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2363 `foot:' | Size of chunk, in bytes |
2364 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2365
2366 Each tree holding treenodes is a tree of unique chunk sizes. Chunks
2367 of the same size are arranged in a circularly-linked list, with only
2368 the oldest chunk (the next to be used, in our FIFO ordering)
2369 actually in the tree. (Tree members are distinguished by a non-null
2370 parent pointer.) If a chunk with the same size an an existing node
2371 is inserted, it is linked off the existing node using pointers that
2372 work in the same way as fd/bk pointers of small chunks.
2373
2374 Each tree contains a power of 2 sized range of chunk sizes (the
2375 smallest is 0x100 <= x < 0x180), which is is divided in half at each
2376 tree level, with the chunks in the smaller half of the range (0x100
2377 <= x < 0x140 for the top nose) in the left subtree and the larger
2378 half (0x140 <= x < 0x180) in the right subtree. This is, of course,
2379 done by inspecting individual bits.
2380
2381 Using these rules, each node's left subtree contains all smaller
2382 sizes than its right subtree. However, the node at the root of each
2383 subtree has no particular ordering relationship to either. (The
2384 dividing line between the subtree sizes is based on trie relation.)
2385 If we remove the last chunk of a given size from the interior of the
2386 tree, we need to replace it with a leaf node. The tree ordering
2387 rules permit a node to be replaced by any leaf below it.
2388
2389 The smallest chunk in a tree (a common operation in a best-fit
2390 allocator) can be found by walking a path to the leftmost leaf in
2391 the tree. Unlike a usual binary tree, where we follow left child
2392 pointers until we reach a null, here we follow the right child
2393 pointer any time the left one is null, until we reach a leaf with
2394 both child pointers null. The smallest chunk in the tree will be
2395 somewhere along that path.
2396
2397 The worst case number of steps to add, find, or remove a node is
2398 bounded by the number of bits differentiating chunks within
2399 bins. Under current bin calculations, this ranges from 6 up to 21
2400 (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
2401 is of course much better.
2402 */
2403
2404 struct malloc_tree_chunk {
2405 /* The first four fields must be compatible with malloc_chunk */
2406 size_t prev_foot;
2407 size_t head;
2408 struct malloc_tree_chunk* fd;
2409 struct malloc_tree_chunk* bk;
2410
2411 struct malloc_tree_chunk* child[2];
2412 struct malloc_tree_chunk* parent;
2413 bindex_t index;
2414 };
2415
2416 typedef struct malloc_tree_chunk tchunk;
2417 typedef struct malloc_tree_chunk* tchunkptr;
2418 typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */
2419
2420 /* A little helper macro for trees */
2421 #define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
2422
2423 /* ----------------------------- Segments -------------------------------- */
2424
2425 /*
2426 Each malloc space may include non-contiguous segments, held in a
2427 list headed by an embedded malloc_segment record representing the
2428 top-most space. Segments also include flags holding properties of
2429 the space. Large chunks that are directly allocated by mmap are not
2430 included in this list. They are instead independently created and
2431 destroyed without otherwise keeping track of them.
2432
2433 Segment management mainly comes into play for spaces allocated by
2434 MMAP. Any call to MMAP might or might not return memory that is
2435 adjacent to an existing segment. MORECORE normally contiguously
2436 extends the current space, so this space is almost always adjacent,
2437 which is simpler and faster to deal with. (This is why MORECORE is
2438 used preferentially to MMAP when both are available -- see
2439 sys_alloc.) When allocating using MMAP, we don't use any of the
2440 hinting mechanisms (inconsistently) supported in various
2441 implementations of unix mmap, or distinguish reserving from
2442 committing memory. Instead, we just ask for space, and exploit
2443 contiguity when we get it. It is probably possible to do
2444 better than this on some systems, but no general scheme seems
2445 to be significantly better.
2446
2447 Management entails a simpler variant of the consolidation scheme
2448 used for chunks to reduce fragmentation -- new adjacent memory is
2449 normally prepended or appended to an existing segment. However,
2450 there are limitations compared to chunk consolidation that mostly
2451 reflect the fact that segment processing is relatively infrequent
2452 (occurring only when getting memory from system) and that we
2453 don't expect to have huge numbers of segments:
2454
2455 * Segments are not indexed, so traversal requires linear scans. (It
2456 would be possible to index these, but is not worth the extra
2457 overhead and complexity for most programs on most platforms.)
2458 * New segments are only appended to old ones when holding top-most
2459 memory; if they cannot be prepended to others, they are held in
2460 different segments.
2461
2462 Except for the top-most segment of an mstate, each segment record
2463 is kept at the tail of its segment. Segments are added by pushing
2464 segment records onto the list headed by &mstate.seg for the
2465 containing mstate.
2466
2467 Segment flags control allocation/merge/deallocation policies:
2468 * If EXTERN_BIT set, then we did not allocate this segment,
2469 and so should not try to deallocate or merge with others.
2470 (This currently holds only for the initial segment passed
2471 into create_mspace_with_base.)
2472 * If USE_MMAP_BIT set, the segment may be merged with
2473 other surrounding mmapped segments and trimmed/de-allocated
2474 using munmap.
2475 * If neither bit is set, then the segment was obtained using
2476 MORECORE so can be merged with surrounding MORECORE'd segments
2477 and deallocated/trimmed using MORECORE with negative arguments.
2478 */
2479
2480 struct malloc_segment {
2481 char* base; /* base address */
2482 size_t size; /* allocated size */
2483 struct malloc_segment* next; /* ptr to next segment */
2484 flag_t sflags; /* mmap and extern flag */
2485 };
2486
2487 #define is_mmapped_segment(S) ((S)->sflags & USE_MMAP_BIT)
2488 #define is_extern_segment(S) ((S)->sflags & EXTERN_BIT)
2489
2490 typedef struct malloc_segment msegment;
2491 typedef struct malloc_segment* msegmentptr;
2492
2493 /* ---------------------------- malloc_state ----------------------------- */
2494
2495 /*
2496 A malloc_state holds all of the bookkeeping for a space.
2497 The main fields are:
2498
2499 Top
2500 The topmost chunk of the currently active segment. Its size is
2501 cached in topsize. The actual size of topmost space is
2502 topsize+TOP_FOOT_SIZE, which includes space reserved for adding
2503 fenceposts and segment records if necessary when getting more
2504 space from the system. The size at which to autotrim top is
2505 cached from mparams in trim_check, except that it is disabled if
2506 an autotrim fails.
2507
2508 Designated victim (dv)
2509 This is the preferred chunk for servicing small requests that
2510 don't have exact fits. It is normally the chunk split off most
2511 recently to service another small request. Its size is cached in
2512 dvsize. The link fields of this chunk are not maintained since it
2513 is not kept in a bin.
2514
2515 SmallBins
2516 An array of bin headers for free chunks. These bins hold chunks
2517 with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
2518 chunks of all the same size, spaced 8 bytes apart. To simplify
2519 use in double-linked lists, each bin header acts as a malloc_chunk
2520 pointing to the real first node, if it exists (else pointing to
2521 itself). This avoids special-casing for headers. But to avoid
2522 waste, we allocate only the fd/bk pointers of bins, and then use
2523 repositioning tricks to treat these as the fields of a chunk.
2524
2525 TreeBins
2526 Treebins are pointers to the roots of trees holding a range of
2527 sizes. There are 2 equally spaced treebins for each power of two
2528 from TREE_SHIFT to TREE_SHIFT+16. The last bin holds anything
2529 larger.
2530
2531 Bin maps
2532 There is one bit map for small bins ("smallmap") and one for
2533 treebins ("treemap). Each bin sets its bit when non-empty, and
2534 clears the bit when empty. Bit operations are then used to avoid
2535 bin-by-bin searching -- nearly all "search" is done without ever
2536 looking at bins that won't be selected. The bit maps
2537 conservatively use 32 bits per map word, even if on 64bit system.
2538 For a good description of some of the bit-based techniques used
2539 here, see Henry S. Warren Jr's book "Hacker's Delight" (and
2540 supplement at http://hackersdelight.org/). Many of these are
2541 intended to reduce the branchiness of paths through malloc etc, as
2542 well as to reduce the number of memory locations read or written.
2543
2544 Segments
2545 A list of segments headed by an embedded malloc_segment record
2546 representing the initial space.
2547
2548 Address check support
2549 The least_addr field is the least address ever obtained from
2550 MORECORE or MMAP. Attempted frees and reallocs of any address less
2551 than this are trapped (unless INSECURE is defined).
2552
2553 Magic tag
2554 A cross-check field that should always hold same value as mparams.magic.
2555
2556 Max allowed footprint
2557 The maximum allowed bytes to allocate from system (zero means no limit)
2558
2559 Flags
2560 Bits recording whether to use MMAP, locks, or contiguous MORECORE
2561
2562 Statistics
2563 Each space keeps track of current and maximum system memory
2564 obtained via MORECORE or MMAP.
2565
2566 Trim support
2567 Fields holding the amount of unused topmost memory that should trigger
2568 trimming, and a counter to force periodic scanning to release unused
2569 non-topmost segments.
2570
2571 Locking
2572 If USE_LOCKS is defined, the "mutex" lock is acquired and released
2573 around every public call using this mspace.
2574
2575 Extension support
2576 A void* pointer and a size_t field that can be used to help implement
2577 extensions to this malloc.
2578 */
2579
2580 /* Bin types, widths and sizes */
2581 #define NSMALLBINS (32U)
2582 #define NTREEBINS (32U)
2583 #define SMALLBIN_SHIFT (3U)
2584 #define SMALLBIN_WIDTH (SIZE_T_ONE << SMALLBIN_SHIFT)
2585 #define TREEBIN_SHIFT (8U)
2586 #define MIN_LARGE_SIZE (SIZE_T_ONE << TREEBIN_SHIFT)
2587 #define MAX_SMALL_SIZE (MIN_LARGE_SIZE - SIZE_T_ONE)
2588 #define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)
2589
2590 struct malloc_state {
2591 binmap_t smallmap;
2592 binmap_t treemap;
2593 size_t dvsize;
2594 size_t topsize;
2595 char* least_addr;
2596 mchunkptr dv;
2597 mchunkptr top;
2598 size_t trim_check;
2599 size_t release_checks;
2600 size_t magic;
2601 mchunkptr smallbins[(NSMALLBINS+1)*2];
2602 tbinptr treebins[NTREEBINS];
2603 size_t footprint;
2604 size_t max_footprint;
2605 size_t footprint_limit; /* zero means no limit */
2606 flag_t mflags;
2607 #if USE_LOCKS
2608 MLOCK_T mutex; /* locate lock among fields that rarely change */
2609 #endif /* USE_LOCKS */
2610 msegment seg;
2611 void* extp; /* Unused but available for extensions */
2612 size_t exts;
2613 };
2614
2615 typedef struct malloc_state* mstate;
2616
2617 /* ------------- Global malloc_state and malloc_params ------------------- */
2618
2619 /*
2620 malloc_params holds global properties, including those that can be
2621 dynamically set using mallopt. There is a single instance, mparams,
2622 initialized in init_mparams. Note that the non-zeroness of "magic"
2623 also serves as an initialization flag.
2624 */
2625
2626 struct malloc_params {
2627 size_t magic;
2628 size_t page_size;
2629 size_t granularity;
2630 size_t mmap_threshold;
2631 size_t trim_threshold;
2632 flag_t default_mflags;
2633 };
2634
2635 static struct malloc_params mparams;
2636
2637 /* Ensure mparams initialized */
2638 #define ensure_initialization() (void)(mparams.magic != 0 || init_mparams())
2639
2640 #if !ONLY_MSPACES
2641
2642 /* The global malloc_state used for all non-"mspace" calls */
2643 static struct malloc_state _gm_;
2644 #define gm (&_gm_)
2645 #define is_global(M) ((M) == &_gm_)
2646
2647 #endif /* !ONLY_MSPACES */
2648
2649 #define is_initialized(M) ((M)->top != 0)
2650
2651 /* -------------------------- system alloc setup ------------------------- */
2652
2653 /* Operations on mflags */
2654
2655 #define use_lock(M) ((M)->mflags & USE_LOCK_BIT)
2656 #define enable_lock(M) ((M)->mflags |= USE_LOCK_BIT)
2657 #if USE_LOCKS
2658 #define disable_lock(M) ((M)->mflags &= ~USE_LOCK_BIT)
2659 #else
2660 #define disable_lock(M)
2661 #endif
2662
2663 #define use_mmap(M) ((M)->mflags & USE_MMAP_BIT)
2664 #define enable_mmap(M) ((M)->mflags |= USE_MMAP_BIT)
2665 #if HAVE_MMAP
2666 #define disable_mmap(M) ((M)->mflags &= ~USE_MMAP_BIT)
2667 #else
2668 #define disable_mmap(M)
2669 #endif
2670
2671 #define use_noncontiguous(M) ((M)->mflags & USE_NONCONTIGUOUS_BIT)
2672 #define disable_contiguous(M) ((M)->mflags |= USE_NONCONTIGUOUS_BIT)
2673
2674 #define set_lock(M,L)\
2675 ((M)->mflags = (L)?\
2676 ((M)->mflags | USE_LOCK_BIT) :\
2677 ((M)->mflags & ~USE_LOCK_BIT))
2678
2679 /* page-align a size */
2680 #define page_align(S)\
2681 (((S) + (mparams.page_size - SIZE_T_ONE)) & ~(mparams.page_size - SIZE_T_ONE))
2682
2683 /* granularity-align a size */
2684 #define granularity_align(S)\
2685 (((S) + (mparams.granularity - SIZE_T_ONE))\
2686 & ~(mparams.granularity - SIZE_T_ONE))
2687
2688
2689 /* For mmap, use granularity alignment on windows, else page-align */
2690 #ifdef WIN32
2691 #define mmap_align(S) granularity_align(S)
2692 #else
2693 #define mmap_align(S) page_align(S)
2694 #endif
2695
2696 /* For sys_alloc, enough padding to ensure can malloc request on success */
2697 #define SYS_ALLOC_PADDING (TOP_FOOT_SIZE + MALLOC_ALIGNMENT)
2698
2699 #define is_page_aligned(S)\
2700 (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
2701 #define is_granularity_aligned(S)\
2702 (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)
2703
2704 /* True if segment S holds address A */
2705 #define segment_holds(S, A)\
2706 ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)
2707
2708 /* Return segment holding given address */
segment_holding(mstate m,char * addr)2709 static msegmentptr segment_holding(mstate m, char* addr) {
2710 msegmentptr sp = &m->seg;
2711 for (;;) {
2712 if (addr >= sp->base && addr < sp->base + sp->size)
2713 return sp;
2714 if ((sp = sp->next) == 0)
2715 return 0;
2716 }
2717 }
2718
2719 /* Return true if segment contains a segment link */
has_segment_link(mstate m,msegmentptr ss)2720 static int has_segment_link(mstate m, msegmentptr ss) {
2721 msegmentptr sp = &m->seg;
2722 for (;;) {
2723 if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
2724 return 1;
2725 if ((sp = sp->next) == 0)
2726 return 0;
2727 }
2728 }
2729
2730 #ifndef MORECORE_CANNOT_TRIM
2731 #define should_trim(M,s) ((s) > (M)->trim_check)
2732 #else /* MORECORE_CANNOT_TRIM */
2733 #define should_trim(M,s) (0)
2734 #endif /* MORECORE_CANNOT_TRIM */
2735
2736 /*
2737 TOP_FOOT_SIZE is padding at the end of a segment, including space
2738 that may be needed to place segment records and fenceposts when new
2739 noncontiguous segments are added.
2740 */
2741 #define TOP_FOOT_SIZE\
2742 (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)
2743
2744
2745 /* ------------------------------- Hooks -------------------------------- */
2746
2747 /*
2748 PREACTION should be defined to return 0 on success, and nonzero on
2749 failure. If you are not using locking, you can redefine these to do
2750 anything you like.
2751 */
2752
2753 #if USE_LOCKS
2754 #define PREACTION(M) ((use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
2755 #define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
2756 #else /* USE_LOCKS */
2757
2758 #ifndef PREACTION
2759 #define PREACTION(M) (0)
2760 #endif /* PREACTION */
2761
2762 #ifndef POSTACTION
2763 #define POSTACTION(M)
2764 #endif /* POSTACTION */
2765
2766 #endif /* USE_LOCKS */
2767
2768 /*
2769 CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
2770 USAGE_ERROR_ACTION is triggered on detected bad frees and
2771 reallocs. The argument p is an address that might have triggered the
2772 fault. It is ignored by the two predefined actions, but might be
2773 useful in custom actions that try to help diagnose errors.
2774 */
2775
2776 #if PROCEED_ON_ERROR
2777
2778 /* A count of the number of corruption errors causing resets */
2779 int malloc_corruption_error_count;
2780
2781 /* default corruption action */
2782 static void reset_on_error(mstate m);
2783
2784 #define CORRUPTION_ERROR_ACTION(m) reset_on_error(m)
2785 #define USAGE_ERROR_ACTION(m, p)
2786
2787 #else /* PROCEED_ON_ERROR */
2788
2789 #ifndef CORRUPTION_ERROR_ACTION
2790 #define CORRUPTION_ERROR_ACTION(m) ABORT
2791 #endif /* CORRUPTION_ERROR_ACTION */
2792
2793 #ifndef USAGE_ERROR_ACTION
2794 #define USAGE_ERROR_ACTION(m,p) ABORT
2795 #endif /* USAGE_ERROR_ACTION */
2796
2797 #endif /* PROCEED_ON_ERROR */
2798
2799
2800 /* -------------------------- Debugging setup ---------------------------- */
2801
2802 #if ! DEBUG
2803
2804 #define check_free_chunk(M,P)
2805 #define check_inuse_chunk(M,P)
2806 #define check_malloced_chunk(M,P,N)
2807 #define check_mmapped_chunk(M,P)
2808 #define check_malloc_state(M)
2809 #define check_top_chunk(M,P)
2810
2811 #else /* DEBUG */
2812 #define check_free_chunk(M,P) do_check_free_chunk(M,P)
2813 #define check_inuse_chunk(M,P) do_check_inuse_chunk(M,P)
2814 #define check_top_chunk(M,P) do_check_top_chunk(M,P)
2815 #define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
2816 #define check_mmapped_chunk(M,P) do_check_mmapped_chunk(M,P)
2817 #define check_malloc_state(M) do_check_malloc_state(M)
2818
2819 static void do_check_any_chunk(mstate m, mchunkptr p);
2820 static void do_check_top_chunk(mstate m, mchunkptr p);
2821 static void do_check_mmapped_chunk(mstate m, mchunkptr p);
2822 static void do_check_inuse_chunk(mstate m, mchunkptr p);
2823 static void do_check_free_chunk(mstate m, mchunkptr p);
2824 static void do_check_malloced_chunk(mstate m, void* mem, size_t s);
2825 static void do_check_tree(mstate m, tchunkptr t);
2826 static void do_check_treebin(mstate m, bindex_t i);
2827 static void do_check_smallbin(mstate m, bindex_t i);
2828 static void do_check_malloc_state(mstate m);
2829 static int bin_find(mstate m, mchunkptr x);
2830 static size_t traverse_and_check(mstate m);
2831 #endif /* DEBUG */
2832
2833 /* ---------------------------- Indexing Bins ---------------------------- */
2834
2835 #define is_small(s) (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
2836 #define small_index(s) (bindex_t)((s) >> SMALLBIN_SHIFT)
2837 #define small_index2size(i) ((i) << SMALLBIN_SHIFT)
2838 #define MIN_SMALL_INDEX (small_index(MIN_CHUNK_SIZE))
2839
2840 /* addressing by index. See above about smallbin repositioning */
2841 #define smallbin_at(M, i) ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))
2842 #define treebin_at(M,i) (&((M)->treebins[i]))
2843
2844 /* assign tree index for size S to variable I. Use x86 asm if possible */
2845 #if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
2846 #define compute_tree_index(S, I)\
2847 {\
2848 unsigned int X = S >> TREEBIN_SHIFT;\
2849 if (X == 0)\
2850 I = 0;\
2851 else if (X > 0xFFFF)\
2852 I = NTREEBINS-1;\
2853 else {\
2854 unsigned int K = (unsigned) sizeof(X)*__CHAR_BIT__ - 1 - (unsigned) __builtin_clz(X); \
2855 I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2856 }\
2857 }
2858
2859 #elif defined (__INTEL_COMPILER)
2860 #define compute_tree_index(S, I)\
2861 {\
2862 size_t X = S >> TREEBIN_SHIFT;\
2863 if (X == 0)\
2864 I = 0;\
2865 else if (X > 0xFFFF)\
2866 I = NTREEBINS-1;\
2867 else {\
2868 unsigned int K = _bit_scan_reverse (X); \
2869 I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2870 }\
2871 }
2872
2873 #elif defined(_MSC_VER) && _MSC_VER>=1300
2874 #define compute_tree_index(S, I)\
2875 {\
2876 size_t X = S >> TREEBIN_SHIFT;\
2877 if (X == 0)\
2878 I = 0;\
2879 else if (X > 0xFFFF)\
2880 I = NTREEBINS-1;\
2881 else {\
2882 unsigned int K;\
2883 _BitScanReverse((DWORD *) &K, (DWORD) X);\
2884 I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2885 }\
2886 }
2887
2888 #else /* GNUC */
2889 #define compute_tree_index(S, I)\
2890 {\
2891 size_t X = S >> TREEBIN_SHIFT;\
2892 if (X == 0)\
2893 I = 0;\
2894 else if (X > 0xFFFF)\
2895 I = NTREEBINS-1;\
2896 else {\
2897 unsigned int Y = (unsigned int)X;\
2898 unsigned int N = ((Y - 0x100) >> 16) & 8;\
2899 unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
2900 N += K;\
2901 N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
2902 K = 14 - N + ((Y <<= K) >> 15);\
2903 I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
2904 }\
2905 }
2906 #endif /* GNUC */
2907
2908 /* Bit representing maximum resolved size in a treebin at i */
2909 #define bit_for_tree_index(i) \
2910 (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)
2911
2912 /* Shift placing maximum resolved bit in a treebin at i as sign bit */
2913 #define leftshift_for_tree_index(i) \
2914 ((i == NTREEBINS-1)? 0 : \
2915 ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))
2916
2917 /* The size of the smallest chunk held in bin with index i */
2918 #define minsize_for_tree_index(i) \
2919 ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) | \
2920 (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))
2921
2922
2923 /* ------------------------ Operations on bin maps ----------------------- */
2924
2925 /* bit corresponding to given index */
2926 #define idx2bit(i) ((binmap_t)(1) << (i))
2927
2928 /* Mark/Clear bits with given index */
2929 #define mark_smallmap(M,i) ((M)->smallmap |= idx2bit(i))
2930 #define clear_smallmap(M,i) ((M)->smallmap &= ~idx2bit(i))
2931 #define smallmap_is_marked(M,i) ((M)->smallmap & idx2bit(i))
2932
2933 #define mark_treemap(M,i) ((M)->treemap |= idx2bit(i))
2934 #define clear_treemap(M,i) ((M)->treemap &= ~idx2bit(i))
2935 #define treemap_is_marked(M,i) ((M)->treemap & idx2bit(i))
2936
2937 /* isolate the least set bit of a bitmap */
2938 #define least_bit(x) ((x) & -(x))
2939
2940 /* mask with all bits to left of least bit of x on */
2941 #define left_bits(x) ((x<<1) | -(x<<1))
2942
2943 /* mask with all bits to left of or equal to least bit of x on */
2944 #define same_or_left_bits(x) ((x) | -(x))
2945
2946 /* index corresponding to given bit. Use x86 asm if possible */
2947
2948 #if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
2949 #define compute_bit2idx(X, I)\
2950 {\
2951 unsigned int J;\
2952 J = __builtin_ctz(X); \
2953 I = (bindex_t)J;\
2954 }
2955
2956 #elif defined (__INTEL_COMPILER)
2957 #define compute_bit2idx(X, I)\
2958 {\
2959 unsigned int J;\
2960 J = _bit_scan_forward (X); \
2961 I = (bindex_t)J;\
2962 }
2963
2964 #elif defined(_MSC_VER) && _MSC_VER>=1300
2965 #define compute_bit2idx(X, I)\
2966 {\
2967 unsigned int J;\
2968 _BitScanForward((DWORD *) &J, X);\
2969 I = (bindex_t)J;\
2970 }
2971
2972 #elif USE_BUILTIN_FFS
2973 #define compute_bit2idx(X, I) I = ffs(X)-1
2974
2975 #else
2976 #define compute_bit2idx(X, I)\
2977 {\
2978 unsigned int Y = X - 1;\
2979 unsigned int K = Y >> (16-4) & 16;\
2980 unsigned int N = K; Y >>= K;\
2981 N += K = Y >> (8-3) & 8; Y >>= K;\
2982 N += K = Y >> (4-2) & 4; Y >>= K;\
2983 N += K = Y >> (2-1) & 2; Y >>= K;\
2984 N += K = Y >> (1-0) & 1; Y >>= K;\
2985 I = (bindex_t)(N + Y);\
2986 }
2987 #endif /* GNUC */
2988
2989
2990 /* ----------------------- Runtime Check Support ------------------------- */
2991
2992 /*
2993 For security, the main invariant is that malloc/free/etc never
2994 writes to a static address other than malloc_state, unless static
2995 malloc_state itself has been corrupted, which cannot occur via
2996 malloc (because of these checks). In essence this means that we
2997 believe all pointers, sizes, maps etc held in malloc_state, but
2998 check all of those linked or offsetted from other embedded data
2999 structures. These checks are interspersed with main code in a way
3000 that tends to minimize their run-time cost.
3001
3002 When FOOTERS is defined, in addition to range checking, we also
3003 verify footer fields of inuse chunks, which can be used guarantee
3004 that the mstate controlling malloc/free is intact. This is a
3005 streamlined version of the approach described by William Robertson
3006 et al in "Run-time Detection of Heap-based Overflows" LISA'03
3007 http://www.usenix.org/events/lisa03/tech/robertson.html The footer
3008 of an inuse chunk holds the xor of its mstate and a random seed,
3009 that is checked upon calls to free() and realloc(). This is
3010 (probabalistically) unguessable from outside the program, but can be
3011 computed by any code successfully malloc'ing any chunk, so does not
3012 itself provide protection against code that has already broken
3013 security through some other means. Unlike Robertson et al, we
3014 always dynamically check addresses of all offset chunks (previous,
3015 next, etc). This turns out to be cheaper than relying on hashes.
3016 */
3017
3018 #if !INSECURE
3019 /* Check if address a is at least as high as any from MORECORE or MMAP */
3020 #define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
3021 /* Check if address of next chunk n is higher than base chunk p */
3022 #define ok_next(p, n) ((char*)(p) < (char*)(n))
3023 /* Check if p has inuse status */
3024 #define ok_inuse(p) is_inuse(p)
3025 /* Check if p has its pinuse bit on */
3026 #define ok_pinuse(p) pinuse(p)
3027
3028 #else /* !INSECURE */
3029 #define ok_address(M, a) (1)
3030 #define ok_next(b, n) (1)
3031 #define ok_inuse(p) (1)
3032 #define ok_pinuse(p) (1)
3033 #endif /* !INSECURE */
3034
3035 #if (FOOTERS && !INSECURE)
3036 /* Check if (alleged) mstate m has expected magic field */
3037 #define ok_magic(M) ((M)->magic == mparams.magic)
3038 #else /* (FOOTERS && !INSECURE) */
3039 #define ok_magic(M) (1)
3040 #endif /* (FOOTERS && !INSECURE) */
3041
3042 /* In gcc, use __builtin_expect to minimize impact of checks */
3043 #if !INSECURE
3044 #if defined(__GNUC__) && __GNUC__ >= 3
3045 #define RTCHECK(e) __builtin_expect(e, 1)
3046 #else /* GNUC */
3047 #define RTCHECK(e) (e)
3048 #endif /* GNUC */
3049 #else /* !INSECURE */
3050 #define RTCHECK(e) (1)
3051 #endif /* !INSECURE */
3052
3053 /* macros to set up inuse chunks with or without footers */
3054
3055 #if !FOOTERS
3056
3057 #define mark_inuse_foot(M,p,s)
3058
3059 /* Macros for setting head/foot of non-mmapped chunks */
3060
3061 /* Set cinuse bit and pinuse bit of next chunk */
3062 #define set_inuse(M,p,s)\
3063 ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
3064 ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
3065
3066 /* Set cinuse and pinuse of this chunk and pinuse of next chunk */
3067 #define set_inuse_and_pinuse(M,p,s)\
3068 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
3069 ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
3070
3071 /* Set size, cinuse and pinuse bit of this chunk */
3072 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
3073 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))
3074
3075 #else /* FOOTERS */
3076
3077 /* Set foot of inuse chunk to be xor of mstate and seed */
3078 #define mark_inuse_foot(M,p,s)\
3079 (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))
3080
3081 #define get_mstate_for(p)\
3082 ((mstate)(((mchunkptr)((char*)(p) +\
3083 (chunksize(p))))->prev_foot ^ mparams.magic))
3084
3085 #define set_inuse(M,p,s)\
3086 ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
3087 (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
3088 mark_inuse_foot(M,p,s))
3089
3090 #define set_inuse_and_pinuse(M,p,s)\
3091 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
3092 (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
3093 mark_inuse_foot(M,p,s))
3094
3095 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
3096 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
3097 mark_inuse_foot(M, p, s))
3098
3099 #endif /* !FOOTERS */
3100
3101 /* ---------------------------- setting mparams -------------------------- */
3102
3103 #if LOCK_AT_FORK
pre_fork(void)3104 static void pre_fork(void) { ACQUIRE_LOCK(&(gm)->mutex); }
post_fork_parent(void)3105 static void post_fork_parent(void) { RELEASE_LOCK(&(gm)->mutex); }
post_fork_child(void)3106 static void post_fork_child(void) { INITIAL_LOCK(&(gm)->mutex); }
3107 #endif /* LOCK_AT_FORK */
3108
3109 /* Initialize mparams */
init_mparams(void)3110 static int init_mparams(void) {
3111 #ifdef NEED_GLOBAL_LOCK_INIT
3112 if (malloc_global_mutex_status <= 0)
3113 init_malloc_global_mutex();
3114 #endif
3115
3116 ACQUIRE_MALLOC_GLOBAL_LOCK();
3117 if (mparams.magic == 0) {
3118 size_t magic;
3119 size_t psize;
3120 size_t gsize;
3121
3122 #ifndef WIN32
3123 psize = malloc_getpagesize;
3124 gsize = ((DEFAULT_GRANULARITY != 0)? DEFAULT_GRANULARITY : psize);
3125 #else /* WIN32 */
3126 {
3127 SYSTEM_INFO system_info;
3128 GetSystemInfo(&system_info);
3129 psize = system_info.dwPageSize;
3130 gsize = ((DEFAULT_GRANULARITY != 0)?
3131 DEFAULT_GRANULARITY : system_info.dwAllocationGranularity);
3132 }
3133 #endif /* WIN32 */
3134
3135 /* Sanity-check configuration:
3136 size_t must be unsigned and as wide as pointer type.
3137 ints must be at least 4 bytes.
3138 alignment must be at least 8.
3139 Alignment, min chunk size, and page size must all be powers of 2.
3140 */
3141 if ((sizeof(size_t) != sizeof(char*)) ||
3142 (MAX_SIZE_T < MIN_CHUNK_SIZE) ||
3143 (sizeof(int) < 4) ||
3144 (MALLOC_ALIGNMENT < (size_t)8U) ||
3145 ((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT-SIZE_T_ONE)) != 0) ||
3146 ((MCHUNK_SIZE & (MCHUNK_SIZE-SIZE_T_ONE)) != 0) ||
3147 ((gsize & (gsize-SIZE_T_ONE)) != 0) ||
3148 ((psize & (psize-SIZE_T_ONE)) != 0))
3149 ABORT;
3150 mparams.granularity = gsize;
3151 mparams.page_size = psize;
3152 mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
3153 mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
3154 #if MORECORE_CONTIGUOUS
3155 mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
3156 #else /* MORECORE_CONTIGUOUS */
3157 mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
3158 #endif /* MORECORE_CONTIGUOUS */
3159
3160 #if !ONLY_MSPACES
3161 /* Set up lock for main malloc area */
3162 gm->mflags = mparams.default_mflags;
3163 (void)INITIAL_LOCK(&gm->mutex);
3164 #endif
3165 #if LOCK_AT_FORK
3166 pthread_atfork(&pre_fork, &post_fork_parent, &post_fork_child);
3167 #endif
3168
3169 {
3170 #if USE_DEV_RANDOM
3171 int fd;
3172 unsigned char buf[sizeof(size_t)];
3173 /* Try to use /dev/urandom, else fall back on using time */
3174 if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
3175 read(fd, buf, sizeof(buf)) == sizeof(buf)) {
3176 magic = *((size_t *) buf);
3177 close(fd);
3178 }
3179 else
3180 #endif /* USE_DEV_RANDOM */
3181 #ifdef WIN32
3182 magic = (size_t)(GetTickCount() ^ (size_t)0x55555555U);
3183 #elif defined(LACKS_TIME_H)
3184 magic = (size_t)&magic ^ (size_t)0x55555555U;
3185 #else
3186 magic = (size_t)(time(0) ^ (size_t)0x55555555U);
3187 #endif
3188 magic |= (size_t)8U; /* ensure nonzero */
3189 magic &= ~(size_t)7U; /* improve chances of fault for bad values */
3190 /* Until memory modes commonly available, use volatile-write */
3191 (*(volatile size_t *)(&(mparams.magic))) = magic;
3192 }
3193 }
3194
3195 RELEASE_MALLOC_GLOBAL_LOCK();
3196 return 1;
3197 }
3198
3199 /* support for mallopt */
change_mparam(int param_number,int value)3200 static int change_mparam(int param_number, int value) {
3201 size_t val;
3202 ensure_initialization();
3203 val = (value == -1)? MAX_SIZE_T : (size_t)value;
3204 switch(param_number) {
3205 case M_TRIM_THRESHOLD:
3206 mparams.trim_threshold = val;
3207 return 1;
3208 case M_GRANULARITY:
3209 if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
3210 mparams.granularity = val;
3211 return 1;
3212 }
3213 else
3214 return 0;
3215 case M_MMAP_THRESHOLD:
3216 mparams.mmap_threshold = val;
3217 return 1;
3218 default:
3219 return 0;
3220 }
3221 }
3222
3223 #if DEBUG
3224 /* ------------------------- Debugging Support --------------------------- */
3225
3226 /* Check properties of any chunk, whether free, inuse, mmapped etc */
do_check_any_chunk(mstate m,mchunkptr p)3227 static void do_check_any_chunk(mstate m, mchunkptr p) {
3228 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3229 assert(ok_address(m, p));
3230 }
3231
3232 /* Check properties of top chunk */
do_check_top_chunk(mstate m,mchunkptr p)3233 static void do_check_top_chunk(mstate m, mchunkptr p) {
3234 msegmentptr sp = segment_holding(m, (char*)p);
3235 size_t sz = p->head & ~INUSE_BITS; /* third-lowest bit can be set! */
3236 assert(sp != 0);
3237 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3238 assert(ok_address(m, p));
3239 assert(sz == m->topsize);
3240 assert(sz > 0);
3241 assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
3242 assert(pinuse(p));
3243 assert(!pinuse(chunk_plus_offset(p, sz)));
3244 }
3245
3246 /* Check properties of (inuse) mmapped chunks */
do_check_mmapped_chunk(mstate m,mchunkptr p)3247 static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
3248 size_t sz = chunksize(p);
3249 size_t len = (sz + (p->prev_foot) + MMAP_FOOT_PAD);
3250 assert(is_mmapped(p));
3251 assert(use_mmap(m));
3252 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3253 assert(ok_address(m, p));
3254 assert(!is_small(sz));
3255 assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
3256 assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
3257 assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
3258 }
3259
3260 /* Check properties of inuse chunks */
do_check_inuse_chunk(mstate m,mchunkptr p)3261 static void do_check_inuse_chunk(mstate m, mchunkptr p) {
3262 do_check_any_chunk(m, p);
3263 assert(is_inuse(p));
3264 assert(next_pinuse(p));
3265 /* If not pinuse and not mmapped, previous chunk has OK offset */
3266 assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
3267 if (is_mmapped(p))
3268 do_check_mmapped_chunk(m, p);
3269 }
3270
3271 /* Check properties of free chunks */
do_check_free_chunk(mstate m,mchunkptr p)3272 static void do_check_free_chunk(mstate m, mchunkptr p) {
3273 size_t sz = chunksize(p);
3274 mchunkptr next = chunk_plus_offset(p, sz);
3275 do_check_any_chunk(m, p);
3276 assert(!is_inuse(p));
3277 assert(!next_pinuse(p));
3278 assert (!is_mmapped(p));
3279 if (p != m->dv && p != m->top) {
3280 if (sz >= MIN_CHUNK_SIZE) {
3281 assert((sz & CHUNK_ALIGN_MASK) == 0);
3282 assert(is_aligned(chunk2mem(p)));
3283 assert(next->prev_foot == sz);
3284 assert(pinuse(p));
3285 assert (next == m->top || is_inuse(next));
3286 assert(p->fd->bk == p);
3287 assert(p->bk->fd == p);
3288 }
3289 else /* markers are always of size SIZE_T_SIZE */
3290 assert(sz == SIZE_T_SIZE);
3291 }
3292 }
3293
3294 /* Check properties of malloced chunks at the point they are malloced */
do_check_malloced_chunk(mstate m,void * mem,size_t s)3295 static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
3296 if (mem != 0) {
3297 mchunkptr p = mem2chunk(mem);
3298 size_t sz = p->head & ~INUSE_BITS;
3299 do_check_inuse_chunk(m, p);
3300 assert((sz & CHUNK_ALIGN_MASK) == 0);
3301 assert(sz >= MIN_CHUNK_SIZE);
3302 assert(sz >= s);
3303 /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
3304 assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
3305 }
3306 }
3307
3308 /* Check a tree and its subtrees. */
do_check_tree(mstate m,tchunkptr t)3309 static void do_check_tree(mstate m, tchunkptr t) {
3310 tchunkptr head = 0;
3311 tchunkptr u = t;
3312 bindex_t tindex = t->index;
3313 size_t tsize = chunksize(t);
3314 bindex_t idx;
3315 compute_tree_index(tsize, idx);
3316 assert(tindex == idx);
3317 assert(tsize >= MIN_LARGE_SIZE);
3318 assert(tsize >= minsize_for_tree_index(idx));
3319 assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));
3320
3321 do { /* traverse through chain of same-sized nodes */
3322 do_check_any_chunk(m, ((mchunkptr)u));
3323 assert(u->index == tindex);
3324 assert(chunksize(u) == tsize);
3325 assert(!is_inuse(u));
3326 assert(!next_pinuse(u));
3327 assert(u->fd->bk == u);
3328 assert(u->bk->fd == u);
3329 if (u->parent == 0) {
3330 assert(u->child[0] == 0);
3331 assert(u->child[1] == 0);
3332 }
3333 else {
3334 assert(head == 0); /* only one node on chain has parent */
3335 head = u;
3336 assert(u->parent != u);
3337 assert (u->parent->child[0] == u ||
3338 u->parent->child[1] == u ||
3339 *((tbinptr*)(u->parent)) == u);
3340 if (u->child[0] != 0) {
3341 assert(u->child[0]->parent == u);
3342 assert(u->child[0] != u);
3343 do_check_tree(m, u->child[0]);
3344 }
3345 if (u->child[1] != 0) {
3346 assert(u->child[1]->parent == u);
3347 assert(u->child[1] != u);
3348 do_check_tree(m, u->child[1]);
3349 }
3350 if (u->child[0] != 0 && u->child[1] != 0) {
3351 assert(chunksize(u->child[0]) < chunksize(u->child[1]));
3352 }
3353 }
3354 u = u->fd;
3355 } while (u != t);
3356 assert(head != 0);
3357 }
3358
3359 /* Check all the chunks in a treebin. */
do_check_treebin(mstate m,bindex_t i)3360 static void do_check_treebin(mstate m, bindex_t i) {
3361 tbinptr* tb = treebin_at(m, i);
3362 tchunkptr t = *tb;
3363 int empty = (m->treemap & (1U << i)) == 0;
3364 if (t == 0)
3365 assert(empty);
3366 if (!empty)
3367 do_check_tree(m, t);
3368 }
3369
3370 /* Check all the chunks in a smallbin. */
do_check_smallbin(mstate m,bindex_t i)3371 static void do_check_smallbin(mstate m, bindex_t i) {
3372 sbinptr b = smallbin_at(m, i);
3373 mchunkptr p = b->bk;
3374 unsigned int empty = (m->smallmap & (1U << i)) == 0;
3375 if (p == b)
3376 assert(empty);
3377 if (!empty) {
3378 for (; p != b; p = p->bk) {
3379 size_t size = chunksize(p);
3380 mchunkptr q;
3381 /* each chunk claims to be free */
3382 do_check_free_chunk(m, p);
3383 /* chunk belongs in bin */
3384 assert(small_index(size) == i);
3385 assert(p->bk == b || chunksize(p->bk) == chunksize(p));
3386 /* chunk is followed by an inuse chunk */
3387 q = next_chunk(p);
3388 if (q->head != FENCEPOST_HEAD)
3389 do_check_inuse_chunk(m, q);
3390 }
3391 }
3392 }
3393
3394 /* Find x in a bin. Used in other check functions. */
bin_find(mstate m,mchunkptr x)3395 static int bin_find(mstate m, mchunkptr x) {
3396 size_t size = chunksize(x);
3397 if (is_small(size)) {
3398 bindex_t sidx = small_index(size);
3399 sbinptr b = smallbin_at(m, sidx);
3400 if (smallmap_is_marked(m, sidx)) {
3401 mchunkptr p = b;
3402 do {
3403 if (p == x)
3404 return 1;
3405 } while ((p = p->fd) != b);
3406 }
3407 }
3408 else {
3409 bindex_t tidx;
3410 compute_tree_index(size, tidx);
3411 if (treemap_is_marked(m, tidx)) {
3412 tchunkptr t = *treebin_at(m, tidx);
3413 size_t sizebits = size << leftshift_for_tree_index(tidx);
3414 while (t != 0 && chunksize(t) != size) {
3415 t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
3416 sizebits <<= 1;
3417 }
3418 if (t != 0) {
3419 tchunkptr u = t;
3420 do {
3421 if (u == (tchunkptr)x)
3422 return 1;
3423 } while ((u = u->fd) != t);
3424 }
3425 }
3426 }
3427 return 0;
3428 }
3429
3430 /* Traverse each chunk and check it; return total */
traverse_and_check(mstate m)3431 static size_t traverse_and_check(mstate m) {
3432 size_t sum = 0;
3433 if (is_initialized(m)) {
3434 msegmentptr s = &m->seg;
3435 sum += m->topsize + TOP_FOOT_SIZE;
3436 while (s != 0) {
3437 mchunkptr q = align_as_chunk(s->base);
3438 mchunkptr lastq = 0;
3439 assert(pinuse(q));
3440 while (segment_holds(s, q) &&
3441 q != m->top && q->head != FENCEPOST_HEAD) {
3442 sum += chunksize(q);
3443 if (is_inuse(q)) {
3444 assert(!bin_find(m, q));
3445 do_check_inuse_chunk(m, q);
3446 }
3447 else {
3448 assert(q == m->dv || bin_find(m, q));
3449 assert(lastq == 0 || is_inuse(lastq)); /* Not 2 consecutive free */
3450 do_check_free_chunk(m, q);
3451 }
3452 lastq = q;
3453 q = next_chunk(q);
3454 }
3455 s = s->next;
3456 }
3457 }
3458 return sum;
3459 }
3460
3461
3462 /* Check all properties of malloc_state. */
do_check_malloc_state(mstate m)3463 static void do_check_malloc_state(mstate m) {
3464 bindex_t i;
3465 size_t total;
3466 /* check bins */
3467 for (i = 0; i < NSMALLBINS; ++i)
3468 do_check_smallbin(m, i);
3469 for (i = 0; i < NTREEBINS; ++i)
3470 do_check_treebin(m, i);
3471
3472 if (m->dvsize != 0) { /* check dv chunk */
3473 do_check_any_chunk(m, m->dv);
3474 assert(m->dvsize == chunksize(m->dv));
3475 assert(m->dvsize >= MIN_CHUNK_SIZE);
3476 assert(bin_find(m, m->dv) == 0);
3477 }
3478
3479 if (m->top != 0) { /* check top chunk */
3480 do_check_top_chunk(m, m->top);
3481 /*assert(m->topsize == chunksize(m->top)); redundant */
3482 assert(m->topsize > 0);
3483 assert(bin_find(m, m->top) == 0);
3484 }
3485
3486 total = traverse_and_check(m);
3487 assert(total <= m->footprint);
3488 assert(m->footprint <= m->max_footprint);
3489 }
3490 #endif /* DEBUG */
3491
3492 /* ----------------------------- statistics ------------------------------ */
3493
3494 #if !NO_MALLINFO
internal_mallinfo(mstate m)3495 static struct mallinfo internal_mallinfo(mstate m) {
3496 struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
3497 ensure_initialization();
3498 if (!PREACTION(m)) {
3499 check_malloc_state(m);
3500 if (is_initialized(m)) {
3501 size_t nfree = SIZE_T_ONE; /* top always free */
3502 size_t mfree = m->topsize + TOP_FOOT_SIZE;
3503 size_t sum = mfree;
3504 msegmentptr s = &m->seg;
3505 while (s != 0) {
3506 mchunkptr q = align_as_chunk(s->base);
3507 while (segment_holds(s, q) &&
3508 q != m->top && q->head != FENCEPOST_HEAD) {
3509 size_t sz = chunksize(q);
3510 sum += sz;
3511 if (!is_inuse(q)) {
3512 mfree += sz;
3513 ++nfree;
3514 }
3515 q = next_chunk(q);
3516 }
3517 s = s->next;
3518 }
3519
3520 nm.arena = sum;
3521 nm.ordblks = nfree;
3522 nm.hblkhd = m->footprint - sum;
3523 nm.usmblks = m->max_footprint;
3524 nm.uordblks = m->footprint - mfree;
3525 nm.fordblks = mfree;
3526 nm.keepcost = m->topsize;
3527 }
3528
3529 POSTACTION(m);
3530 }
3531 return nm;
3532 }
3533 #endif /* !NO_MALLINFO */
3534
3535 #if !NO_MALLOC_STATS
internal_malloc_stats(mstate m)3536 static void internal_malloc_stats(mstate m) {
3537 ensure_initialization();
3538 if (!PREACTION(m)) {
3539 size_t maxfp = 0;
3540 size_t fp = 0;
3541 size_t used = 0;
3542 check_malloc_state(m);
3543 if (is_initialized(m)) {
3544 msegmentptr s = &m->seg;
3545 maxfp = m->max_footprint;
3546 fp = m->footprint;
3547 used = fp - (m->topsize + TOP_FOOT_SIZE);
3548
3549 while (s != 0) {
3550 mchunkptr q = align_as_chunk(s->base);
3551 while (segment_holds(s, q) &&
3552 q != m->top && q->head != FENCEPOST_HEAD) {
3553 if (!is_inuse(q))
3554 used -= chunksize(q);
3555 q = next_chunk(q);
3556 }
3557 s = s->next;
3558 }
3559 }
3560 POSTACTION(m); /* drop lock */
3561 fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp));
3562 fprintf(stderr, "system bytes = %10lu\n", (unsigned long)(fp));
3563 fprintf(stderr, "in use bytes = %10lu\n", (unsigned long)(used));
3564 }
3565 }
3566 #endif /* NO_MALLOC_STATS */
3567
3568 /* ----------------------- Operations on smallbins ----------------------- */
3569
3570 /*
3571 Various forms of linking and unlinking are defined as macros. Even
3572 the ones for trees, which are very long but have very short typical
3573 paths. This is ugly but reduces reliance on inlining support of
3574 compilers.
3575 */
3576
3577 /* Link a free chunk into a smallbin */
3578 #define insert_small_chunk(M, P, S) {\
3579 bindex_t I = small_index(S);\
3580 mchunkptr B = smallbin_at(M, I);\
3581 mchunkptr F = B;\
3582 assert(S >= MIN_CHUNK_SIZE);\
3583 if (!smallmap_is_marked(M, I))\
3584 mark_smallmap(M, I);\
3585 else if (RTCHECK(ok_address(M, B->fd)))\
3586 F = B->fd;\
3587 else {\
3588 CORRUPTION_ERROR_ACTION(M);\
3589 }\
3590 B->fd = P;\
3591 F->bk = P;\
3592 P->fd = F;\
3593 P->bk = B;\
3594 }
3595
3596 /* Unlink a chunk from a smallbin */
3597 #define unlink_small_chunk(M, P, S) {\
3598 mchunkptr F = P->fd;\
3599 mchunkptr B = P->bk;\
3600 bindex_t I = small_index(S);\
3601 assert(P != B);\
3602 assert(P != F);\
3603 assert(chunksize(P) == small_index2size(I));\
3604 if (RTCHECK(F == smallbin_at(M,I) || (ok_address(M, F) && F->bk == P))) { \
3605 if (B == F) {\
3606 clear_smallmap(M, I);\
3607 }\
3608 else if (RTCHECK(B == smallbin_at(M,I) ||\
3609 (ok_address(M, B) && B->fd == P))) {\
3610 F->bk = B;\
3611 B->fd = F;\
3612 }\
3613 else {\
3614 CORRUPTION_ERROR_ACTION(M);\
3615 }\
3616 }\
3617 else {\
3618 CORRUPTION_ERROR_ACTION(M);\
3619 }\
3620 }
3621
3622 /* Unlink the first chunk from a smallbin */
3623 #define unlink_first_small_chunk(M, B, P, I) {\
3624 mchunkptr F = P->fd;\
3625 assert(P != B);\
3626 assert(P != F);\
3627 assert(chunksize(P) == small_index2size(I));\
3628 if (B == F) {\
3629 clear_smallmap(M, I);\
3630 }\
3631 else if (RTCHECK(ok_address(M, F) && F->bk == P)) {\
3632 F->bk = B;\
3633 B->fd = F;\
3634 }\
3635 else {\
3636 CORRUPTION_ERROR_ACTION(M);\
3637 }\
3638 }
3639
3640 /* Replace dv node, binning the old one */
3641 /* Used only when dvsize known to be small */
3642 #define replace_dv(M, P, S) {\
3643 size_t DVS = M->dvsize;\
3644 assert(is_small(DVS));\
3645 if (DVS != 0) {\
3646 mchunkptr DV = M->dv;\
3647 insert_small_chunk(M, DV, DVS);\
3648 }\
3649 M->dvsize = S;\
3650 M->dv = P;\
3651 }
3652
3653 /* ------------------------- Operations on trees ------------------------- */
3654
3655 /* Insert chunk into tree */
3656 #define insert_large_chunk(M, X, S) {\
3657 tbinptr* H;\
3658 bindex_t I;\
3659 compute_tree_index(S, I);\
3660 H = treebin_at(M, I);\
3661 X->index = I;\
3662 X->child[0] = X->child[1] = 0;\
3663 if (!treemap_is_marked(M, I)) {\
3664 mark_treemap(M, I);\
3665 *H = X;\
3666 X->parent = (tchunkptr)H;\
3667 X->fd = X->bk = X;\
3668 }\
3669 else {\
3670 tchunkptr T = *H;\
3671 size_t K = S << leftshift_for_tree_index(I);\
3672 for (;;) {\
3673 if (chunksize(T) != S) {\
3674 tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
3675 K <<= 1;\
3676 if (*C != 0)\
3677 T = *C;\
3678 else if (RTCHECK(ok_address(M, C))) {\
3679 *C = X;\
3680 X->parent = T;\
3681 X->fd = X->bk = X;\
3682 break;\
3683 }\
3684 else {\
3685 CORRUPTION_ERROR_ACTION(M);\
3686 break;\
3687 }\
3688 }\
3689 else {\
3690 tchunkptr F = T->fd;\
3691 if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
3692 T->fd = F->bk = X;\
3693 X->fd = F;\
3694 X->bk = T;\
3695 X->parent = 0;\
3696 break;\
3697 }\
3698 else {\
3699 CORRUPTION_ERROR_ACTION(M);\
3700 break;\
3701 }\
3702 }\
3703 }\
3704 }\
3705 }
3706
3707 /*
3708 Unlink steps:
3709
3710 1. If x is a chained node, unlink it from its same-sized fd/bk links
3711 and choose its bk node as its replacement.
3712 2. If x was the last node of its size, but not a leaf node, it must
3713 be replaced with a leaf node (not merely one with an open left or
3714 right), to make sure that lefts and rights of descendents
3715 correspond properly to bit masks. We use the rightmost descendent
3716 of x. We could use any other leaf, but this is easy to locate and
3717 tends to counteract removal of leftmosts elsewhere, and so keeps
3718 paths shorter than minimally guaranteed. This doesn't loop much
3719 because on average a node in a tree is near the bottom.
3720 3. If x is the base of a chain (i.e., has parent links) relink
3721 x's parent and children to x's replacement (or null if none).
3722 */
3723
3724 #define unlink_large_chunk(M, X) {\
3725 tchunkptr XP = X->parent;\
3726 tchunkptr R;\
3727 if (X->bk != X) {\
3728 tchunkptr F = X->fd;\
3729 R = X->bk;\
3730 if (RTCHECK(ok_address(M, F) && F->bk == X && R->fd == X)) {\
3731 F->bk = R;\
3732 R->fd = F;\
3733 }\
3734 else {\
3735 CORRUPTION_ERROR_ACTION(M);\
3736 }\
3737 }\
3738 else {\
3739 tchunkptr* RP;\
3740 if (((R = *(RP = &(X->child[1]))) != 0) ||\
3741 ((R = *(RP = &(X->child[0]))) != 0)) {\
3742 tchunkptr* CP;\
3743 while ((*(CP = &(R->child[1])) != 0) ||\
3744 (*(CP = &(R->child[0])) != 0)) {\
3745 R = *(RP = CP);\
3746 }\
3747 if (RTCHECK(ok_address(M, RP)))\
3748 *RP = 0;\
3749 else {\
3750 CORRUPTION_ERROR_ACTION(M);\
3751 }\
3752 }\
3753 }\
3754 if (XP != 0) {\
3755 tbinptr* H = treebin_at(M, X->index);\
3756 if (X == *H) {\
3757 if ((*H = R) == 0) \
3758 clear_treemap(M, X->index);\
3759 }\
3760 else if (RTCHECK(ok_address(M, XP))) {\
3761 if (XP->child[0] == X) \
3762 XP->child[0] = R;\
3763 else \
3764 XP->child[1] = R;\
3765 }\
3766 else\
3767 CORRUPTION_ERROR_ACTION(M);\
3768 if (R != 0) {\
3769 if (RTCHECK(ok_address(M, R))) {\
3770 tchunkptr C0, C1;\
3771 R->parent = XP;\
3772 if ((C0 = X->child[0]) != 0) {\
3773 if (RTCHECK(ok_address(M, C0))) {\
3774 R->child[0] = C0;\
3775 C0->parent = R;\
3776 }\
3777 else\
3778 CORRUPTION_ERROR_ACTION(M);\
3779 }\
3780 if ((C1 = X->child[1]) != 0) {\
3781 if (RTCHECK(ok_address(M, C1))) {\
3782 R->child[1] = C1;\
3783 C1->parent = R;\
3784 }\
3785 else\
3786 CORRUPTION_ERROR_ACTION(M);\
3787 }\
3788 }\
3789 else\
3790 CORRUPTION_ERROR_ACTION(M);\
3791 }\
3792 }\
3793 }
3794
3795 /* Relays to large vs small bin operations */
3796
3797 #define insert_chunk(M, P, S)\
3798 if (is_small(S)) insert_small_chunk(M, P, S)\
3799 else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }
3800
3801 #define unlink_chunk(M, P, S)\
3802 if (is_small(S)) unlink_small_chunk(M, P, S)\
3803 else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }
3804
3805
3806 /* Relays to internal calls to malloc/free from realloc, memalign etc */
3807
3808 #if ONLY_MSPACES
3809 #define internal_malloc(m, b) mspace_malloc(m, b)
3810 #define internal_free(m, mem) mspace_free(m,mem);
3811 #else /* ONLY_MSPACES */
3812 #if MSPACES
3813 #define internal_malloc(m, b)\
3814 ((m == gm)? dlmalloc(b) : mspace_malloc(m, b))
3815 #define internal_free(m, mem)\
3816 if (m == gm) dlfree(mem); else mspace_free(m,mem);
3817 #else /* MSPACES */
3818 #define internal_malloc(m, b) dlmalloc(b)
3819 #define internal_free(m, mem) dlfree(mem)
3820 #endif /* MSPACES */
3821 #endif /* ONLY_MSPACES */
3822
3823 /* ----------------------- Direct-mmapping chunks ----------------------- */
3824
3825 /*
3826 Directly mmapped chunks are set up with an offset to the start of
3827 the mmapped region stored in the prev_foot field of the chunk. This
3828 allows reconstruction of the required argument to MUNMAP when freed,
3829 and also allows adjustment of the returned chunk to meet alignment
3830 requirements (especially in memalign).
3831 */
3832
3833 /* Malloc using mmap */
mmap_alloc(mstate m,size_t nb)3834 static void* mmap_alloc(mstate m, size_t nb) {
3835 size_t mmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3836 if (m->footprint_limit != 0) {
3837 size_t fp = m->footprint + mmsize;
3838 if (fp <= m->footprint || fp > m->footprint_limit)
3839 return 0;
3840 }
3841 if (mmsize > nb) { /* Check for wrap around 0 */
3842 char* mm = (char*)(CALL_DIRECT_MMAP(mmsize));
3843 if (mm != CMFAIL) {
3844 size_t offset = align_offset(chunk2mem(mm));
3845 size_t psize = mmsize - offset - MMAP_FOOT_PAD;
3846 mchunkptr p = (mchunkptr)(mm + offset);
3847 p->prev_foot = offset;
3848 p->head = psize;
3849 mark_inuse_foot(m, p, psize);
3850 chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
3851 chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;
3852
3853 if (m->least_addr == 0 || mm < m->least_addr)
3854 m->least_addr = mm;
3855 if ((m->footprint += mmsize) > m->max_footprint)
3856 m->max_footprint = m->footprint;
3857 assert(is_aligned(chunk2mem(p)));
3858 check_mmapped_chunk(m, p);
3859 return chunk2mem(p);
3860 }
3861 }
3862 return 0;
3863 }
3864
3865 /* Realloc using mmap */
mmap_resize(mstate m,mchunkptr oldp,size_t nb,int flags)3866 static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb, int flags) {
3867 size_t oldsize = chunksize(oldp);
3868 (void)flags; /* placate people compiling -Wunused */
3869 if (is_small(nb)) /* Can't shrink mmap regions below small size */
3870 return 0;
3871 /* Keep old chunk if big enough but not too big */
3872 if (oldsize >= nb + SIZE_T_SIZE &&
3873 (oldsize - nb) <= (mparams.granularity << 1))
3874 return oldp;
3875 else {
3876 size_t offset = oldp->prev_foot;
3877 size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
3878 size_t newmmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3879 char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
3880 oldmmsize, newmmsize, flags);
3881 if (cp != CMFAIL) {
3882 mchunkptr newp = (mchunkptr)(cp + offset);
3883 size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
3884 newp->head = psize;
3885 mark_inuse_foot(m, newp, psize);
3886 chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
3887 chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;
3888
3889 if (cp < m->least_addr)
3890 m->least_addr = cp;
3891 if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
3892 m->max_footprint = m->footprint;
3893 check_mmapped_chunk(m, newp);
3894 return newp;
3895 }
3896 }
3897 return 0;
3898 }
3899
3900
3901 /* -------------------------- mspace management -------------------------- */
3902
3903 /* Initialize top chunk and its size */
init_top(mstate m,mchunkptr p,size_t psize)3904 static void init_top(mstate m, mchunkptr p, size_t psize) {
3905 /* Ensure alignment */
3906 size_t offset = align_offset(chunk2mem(p));
3907 p = (mchunkptr)((char*)p + offset);
3908 psize -= offset;
3909
3910 m->top = p;
3911 m->topsize = psize;
3912 p->head = psize | PINUSE_BIT;
3913 /* set size of fake trailing chunk holding overhead space only once */
3914 chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
3915 m->trim_check = mparams.trim_threshold; /* reset on each update */
3916 }
3917
3918 /* Initialize bins for a new mstate that is otherwise zeroed out */
init_bins(mstate m)3919 static void init_bins(mstate m) {
3920 /* Establish circular links for smallbins */
3921 bindex_t i;
3922 for (i = 0; i < NSMALLBINS; ++i) {
3923 sbinptr bin = smallbin_at(m,i);
3924 bin->fd = bin->bk = bin;
3925 }
3926 }
3927
3928 #if PROCEED_ON_ERROR
3929
3930 /* default corruption action */
reset_on_error(mstate m)3931 static void reset_on_error(mstate m) {
3932 int i;
3933 ++malloc_corruption_error_count;
3934 /* Reinitialize fields to forget about all memory */
3935 m->smallmap = m->treemap = 0;
3936 m->dvsize = m->topsize = 0;
3937 m->seg.base = 0;
3938 m->seg.size = 0;
3939 m->seg.next = 0;
3940 m->top = m->dv = 0;
3941 for (i = 0; i < NTREEBINS; ++i)
3942 *treebin_at(m, i) = 0;
3943 init_bins(m);
3944 }
3945 #endif /* PROCEED_ON_ERROR */
3946
3947 /* Allocate chunk and prepend remainder with chunk in successor base. */
prepend_alloc(mstate m,char * newbase,char * oldbase,size_t nb)3948 static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
3949 size_t nb) {
3950 mchunkptr p = align_as_chunk(newbase);
3951 mchunkptr oldfirst = align_as_chunk(oldbase);
3952 size_t psize = (char*)oldfirst - (char*)p;
3953 mchunkptr q = chunk_plus_offset(p, nb);
3954 size_t qsize = psize - nb;
3955 set_size_and_pinuse_of_inuse_chunk(m, p, nb);
3956
3957 assert((char*)oldfirst > (char*)q);
3958 assert(pinuse(oldfirst));
3959 assert(qsize >= MIN_CHUNK_SIZE);
3960
3961 /* consolidate remainder with first chunk of old base */
3962 if (oldfirst == m->top) {
3963 size_t tsize = m->topsize += qsize;
3964 m->top = q;
3965 q->head = tsize | PINUSE_BIT;
3966 check_top_chunk(m, q);
3967 }
3968 else if (oldfirst == m->dv) {
3969 size_t dsize = m->dvsize += qsize;
3970 m->dv = q;
3971 set_size_and_pinuse_of_free_chunk(q, dsize);
3972 }
3973 else {
3974 if (!is_inuse(oldfirst)) {
3975 size_t nsize = chunksize(oldfirst);
3976 unlink_chunk(m, oldfirst, nsize);
3977 oldfirst = chunk_plus_offset(oldfirst, nsize);
3978 qsize += nsize;
3979 }
3980 set_free_with_pinuse(q, qsize, oldfirst);
3981 insert_chunk(m, q, qsize);
3982 check_free_chunk(m, q);
3983 }
3984
3985 check_malloced_chunk(m, chunk2mem(p), nb);
3986 return chunk2mem(p);
3987 }
3988
3989 /* Add a segment to hold a new noncontiguous region */
add_segment(mstate m,char * tbase,size_t tsize,flag_t mmapped)3990 static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
3991 /* Determine locations and sizes of segment, fenceposts, old top */
3992 char* old_top = (char*)m->top;
3993 msegmentptr oldsp = segment_holding(m, old_top);
3994 char* old_end = oldsp->base + oldsp->size;
3995 size_t ssize = pad_request(sizeof(struct malloc_segment));
3996 char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3997 size_t offset = align_offset(chunk2mem(rawsp));
3998 char* asp = rawsp + offset;
3999 char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
4000 mchunkptr sp = (mchunkptr)csp;
4001 msegmentptr ss = (msegmentptr)(chunk2mem(sp));
4002 mchunkptr tnext = chunk_plus_offset(sp, ssize);
4003 mchunkptr p = tnext;
4004 int nfences = 0;
4005
4006 /* reset top to new space */
4007 init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
4008
4009 /* Set up segment record */
4010 assert(is_aligned(ss));
4011 set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
4012 *ss = m->seg; /* Push current record */
4013 m->seg.base = tbase;
4014 m->seg.size = tsize;
4015 m->seg.sflags = mmapped;
4016 m->seg.next = ss;
4017
4018 /* Insert trailing fenceposts */
4019 for (;;) {
4020 mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
4021 p->head = FENCEPOST_HEAD;
4022 ++nfences;
4023 if ((char*)(&(nextp->head)) < old_end)
4024 p = nextp;
4025 else
4026 break;
4027 }
4028 assert(nfences >= 2);
4029
4030 /* Insert the rest of old top into a bin as an ordinary free chunk */
4031 if (csp != old_top) {
4032 mchunkptr q = (mchunkptr)old_top;
4033 size_t psize = csp - old_top;
4034 mchunkptr tn = chunk_plus_offset(q, psize);
4035 set_free_with_pinuse(q, psize, tn);
4036 insert_chunk(m, q, psize);
4037 }
4038
4039 check_top_chunk(m, m->top);
4040 }
4041
4042 /* -------------------------- System allocation -------------------------- */
4043
4044 /* Get memory from system using MORECORE or MMAP */
sys_alloc(mstate m,size_t nb)4045 static void* sys_alloc(mstate m, size_t nb) {
4046 char* tbase = CMFAIL;
4047 size_t tsize = 0;
4048 flag_t mmap_flag = 0;
4049 size_t asize; /* allocation size */
4050
4051 ensure_initialization();
4052
4053 /* Directly map large chunks, but only if already initialized */
4054 if (use_mmap(m) && nb >= mparams.mmap_threshold && m->topsize != 0) {
4055 void* mem = mmap_alloc(m, nb);
4056 if (mem != 0)
4057 return mem;
4058 }
4059
4060 asize = granularity_align(nb + SYS_ALLOC_PADDING);
4061 if (asize <= nb)
4062 return 0; /* wraparound */
4063 if (m->footprint_limit != 0) {
4064 size_t fp = m->footprint + asize;
4065 if (fp <= m->footprint || fp > m->footprint_limit)
4066 return 0;
4067 }
4068
4069 /*
4070 Try getting memory in any of three ways (in most-preferred to
4071 least-preferred order):
4072 1. A call to MORECORE that can normally contiguously extend memory.
4073 (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or
4074 or main space is mmapped or a previous contiguous call failed)
4075 2. A call to MMAP new space (disabled if not HAVE_MMAP).
4076 Note that under the default settings, if MORECORE is unable to
4077 fulfill a request, and HAVE_MMAP is true, then mmap is
4078 used as a noncontiguous system allocator. This is a useful backup
4079 strategy for systems with holes in address spaces -- in this case
4080 sbrk cannot contiguously expand the heap, but mmap may be able to
4081 find space.
4082 3. A call to MORECORE that cannot usually contiguously extend memory.
4083 (disabled if not HAVE_MORECORE)
4084
4085 In all cases, we need to request enough bytes from system to ensure
4086 we can malloc nb bytes upon success, so pad with enough space for
4087 top_foot, plus alignment-pad to make sure we don't lose bytes if
4088 not on boundary, and round this up to a granularity unit.
4089 */
4090
4091 if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
4092 char* br = CMFAIL;
4093 size_t ssize = asize; /* sbrk call size */
4094 msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
4095 ACQUIRE_MALLOC_GLOBAL_LOCK();
4096
4097 if (ss == 0) { /* First time through or recovery */
4098 char* base = (char*)CALL_MORECORE(0);
4099 if (base != CMFAIL) {
4100 size_t fp;
4101 /* Adjust to end on a page boundary */
4102 if (!is_page_aligned(base))
4103 ssize += (page_align((size_t)base) - (size_t)base);
4104 fp = m->footprint + ssize; /* recheck limits */
4105 if (ssize > nb && ssize < HALF_MAX_SIZE_T &&
4106 (m->footprint_limit == 0 ||
4107 (fp > m->footprint && fp <= m->footprint_limit)) &&
4108 (br = (char*)(CALL_MORECORE(ssize))) == base) {
4109 tbase = base;
4110 tsize = ssize;
4111 }
4112 }
4113 }
4114 else {
4115 /* Subtract out existing available top space from MORECORE request. */
4116 ssize = granularity_align(nb - m->topsize + SYS_ALLOC_PADDING);
4117 /* Use mem here only if it did continuously extend old space */
4118 if (ssize < HALF_MAX_SIZE_T &&
4119 (br = (char*)(CALL_MORECORE(ssize))) == ss->base+ss->size) {
4120 tbase = br;
4121 tsize = ssize;
4122 }
4123 }
4124
4125 if (tbase == CMFAIL) { /* Cope with partial failure */
4126 if (br != CMFAIL) { /* Try to use/extend the space we did get */
4127 if (ssize < HALF_MAX_SIZE_T &&
4128 ssize < nb + SYS_ALLOC_PADDING) {
4129 size_t esize = granularity_align(nb + SYS_ALLOC_PADDING - ssize);
4130 if (esize < HALF_MAX_SIZE_T) {
4131 char* end = (char*)CALL_MORECORE(esize);
4132 if (end != CMFAIL)
4133 ssize += esize;
4134 else { /* Can't use; try to release */
4135 (void) CALL_MORECORE(-ssize);
4136 br = CMFAIL;
4137 }
4138 }
4139 }
4140 }
4141 if (br != CMFAIL) { /* Use the space we did get */
4142 tbase = br;
4143 tsize = ssize;
4144 }
4145 else
4146 disable_contiguous(m); /* Don't try contiguous path in the future */
4147 }
4148
4149 RELEASE_MALLOC_GLOBAL_LOCK();
4150 }
4151
4152 if (HAVE_MMAP && tbase == CMFAIL) { /* Try MMAP */
4153 char* mp = (char*)(CALL_MMAP(asize));
4154 if (mp != CMFAIL) {
4155 tbase = mp;
4156 tsize = asize;
4157 mmap_flag = USE_MMAP_BIT;
4158 }
4159 }
4160
4161 if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
4162 if (asize < HALF_MAX_SIZE_T) {
4163 char* br = CMFAIL;
4164 char* end = CMFAIL;
4165 ACQUIRE_MALLOC_GLOBAL_LOCK();
4166 br = (char*)(CALL_MORECORE(asize));
4167 end = (char*)(CALL_MORECORE(0));
4168 RELEASE_MALLOC_GLOBAL_LOCK();
4169 if (br != CMFAIL && end != CMFAIL && br < end) {
4170 size_t ssize = end - br;
4171 if (ssize > nb + TOP_FOOT_SIZE) {
4172 tbase = br;
4173 tsize = ssize;
4174 }
4175 }
4176 }
4177 }
4178
4179 if (tbase != CMFAIL) {
4180
4181 if ((m->footprint += tsize) > m->max_footprint)
4182 m->max_footprint = m->footprint;
4183
4184 if (!is_initialized(m)) { /* first-time initialization */
4185 if (m->least_addr == 0 || tbase < m->least_addr)
4186 m->least_addr = tbase;
4187 m->seg.base = tbase;
4188 m->seg.size = tsize;
4189 m->seg.sflags = mmap_flag;
4190 m->magic = mparams.magic;
4191 m->release_checks = MAX_RELEASE_CHECK_RATE;
4192 init_bins(m);
4193 #if !ONLY_MSPACES
4194 if (is_global(m))
4195 init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
4196 else
4197 #endif
4198 {
4199 /* Offset top by embedded malloc_state */
4200 mchunkptr mn = next_chunk(mem2chunk(m));
4201 init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE);
4202 }
4203 }
4204
4205 else {
4206 /* Try to merge with an existing segment */
4207 msegmentptr sp = &m->seg;
4208 /* Only consider most recent segment if traversal suppressed */
4209 while (sp != 0 && tbase != sp->base + sp->size)
4210 sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next;
4211 if (sp != 0 &&
4212 !is_extern_segment(sp) &&
4213 (sp->sflags & USE_MMAP_BIT) == mmap_flag &&
4214 segment_holds(sp, m->top)) { /* append */
4215 sp->size += tsize;
4216 init_top(m, m->top, m->topsize + tsize);
4217 }
4218 else {
4219 if (tbase < m->least_addr)
4220 m->least_addr = tbase;
4221 sp = &m->seg;
4222 while (sp != 0 && sp->base != tbase + tsize)
4223 sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next;
4224 if (sp != 0 &&
4225 !is_extern_segment(sp) &&
4226 (sp->sflags & USE_MMAP_BIT) == mmap_flag) {
4227 char* oldbase = sp->base;
4228 sp->base = tbase;
4229 sp->size += tsize;
4230 return prepend_alloc(m, tbase, oldbase, nb);
4231 }
4232 else
4233 add_segment(m, tbase, tsize, mmap_flag);
4234 }
4235 }
4236
4237 if (nb < m->topsize) { /* Allocate from new or extended top space */
4238 size_t rsize = m->topsize -= nb;
4239 mchunkptr p = m->top;
4240 mchunkptr r = m->top = chunk_plus_offset(p, nb);
4241 r->head = rsize | PINUSE_BIT;
4242 set_size_and_pinuse_of_inuse_chunk(m, p, nb);
4243 check_top_chunk(m, m->top);
4244 check_malloced_chunk(m, chunk2mem(p), nb);
4245 return chunk2mem(p);
4246 }
4247 }
4248
4249 MALLOC_FAILURE_ACTION;
4250 return 0;
4251 }
4252
4253 /* ----------------------- system deallocation -------------------------- */
4254
4255 /* Unmap and unlink any mmapped segments that don't contain used chunks */
release_unused_segments(mstate m)4256 static size_t release_unused_segments(mstate m) {
4257 size_t released = 0;
4258 int nsegs = 0;
4259 msegmentptr pred = &m->seg;
4260 msegmentptr sp = pred->next;
4261 while (sp != 0) {
4262 char* base = sp->base;
4263 size_t size = sp->size;
4264 msegmentptr next = sp->next;
4265 ++nsegs;
4266 if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
4267 mchunkptr p = align_as_chunk(base);
4268 size_t psize = chunksize(p);
4269 /* Can unmap if first chunk holds entire segment and not pinned */
4270 if (!is_inuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
4271 tchunkptr tp = (tchunkptr)p;
4272 assert(segment_holds(sp, (char*)sp));
4273 if (p == m->dv) {
4274 m->dv = 0;
4275 m->dvsize = 0;
4276 }
4277 else {
4278 unlink_large_chunk(m, tp);
4279 }
4280 if (CALL_MUNMAP(base, size) == 0) {
4281 released += size;
4282 m->footprint -= size;
4283 /* unlink obsoleted record */
4284 sp = pred;
4285 sp->next = next;
4286 }
4287 else { /* back out if cannot unmap */
4288 insert_large_chunk(m, tp, psize);
4289 }
4290 }
4291 }
4292 if (NO_SEGMENT_TRAVERSAL) /* scan only first segment */
4293 break;
4294 pred = sp;
4295 sp = next;
4296 }
4297 /* Reset check counter */
4298 m->release_checks = (((size_t) nsegs > (size_t) MAX_RELEASE_CHECK_RATE)?
4299 (size_t) nsegs : (size_t) MAX_RELEASE_CHECK_RATE);
4300 return released;
4301 }
4302
sys_trim(mstate m,size_t pad)4303 static int sys_trim(mstate m, size_t pad) {
4304 size_t released = 0;
4305 ensure_initialization();
4306 if (pad < MAX_REQUEST && is_initialized(m)) {
4307 pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */
4308
4309 if (m->topsize > pad) {
4310 /* Shrink top space in granularity-size units, keeping at least one */
4311 size_t unit = mparams.granularity;
4312 size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
4313 SIZE_T_ONE) * unit;
4314 msegmentptr sp = segment_holding(m, (char*)m->top);
4315
4316 if (!is_extern_segment(sp)) {
4317 if (is_mmapped_segment(sp)) {
4318 if (HAVE_MMAP &&
4319 sp->size >= extra &&
4320 !has_segment_link(m, sp)) { /* can't shrink if pinned */
4321 size_t newsize = sp->size - extra;
4322 (void)newsize; /* placate people compiling -Wunused-variable */
4323 /* Prefer mremap, fall back to munmap */
4324 if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
4325 (CALL_MUNMAP(sp->base + newsize, extra) == 0)) {
4326 released = extra;
4327 }
4328 }
4329 }
4330 else if (HAVE_MORECORE) {
4331 if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
4332 extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
4333 ACQUIRE_MALLOC_GLOBAL_LOCK();
4334 {
4335 /* Make sure end of memory is where we last set it. */
4336 char* old_br = (char*)(CALL_MORECORE(0));
4337 if (old_br == sp->base + sp->size) {
4338 char* rel_br = (char*)(CALL_MORECORE(-extra));
4339 char* new_br = (char*)(CALL_MORECORE(0));
4340 if (rel_br != CMFAIL && new_br < old_br)
4341 released = old_br - new_br;
4342 }
4343 }
4344 RELEASE_MALLOC_GLOBAL_LOCK();
4345 }
4346 }
4347
4348 if (released != 0) {
4349 sp->size -= released;
4350 m->footprint -= released;
4351 init_top(m, m->top, m->topsize - released);
4352 check_top_chunk(m, m->top);
4353 }
4354 }
4355
4356 /* Unmap any unused mmapped segments */
4357 if (HAVE_MMAP)
4358 released += release_unused_segments(m);
4359
4360 /* On failure, disable autotrim to avoid repeated failed future calls */
4361 if (released == 0 && m->topsize > m->trim_check)
4362 m->trim_check = MAX_SIZE_T;
4363 }
4364
4365 return (released != 0)? 1 : 0;
4366 }
4367
4368 /* Consolidate and bin a chunk. Differs from exported versions
4369 of free mainly in that the chunk need not be marked as inuse.
4370 */
dispose_chunk(mstate m,mchunkptr p,size_t psize)4371 static void dispose_chunk(mstate m, mchunkptr p, size_t psize) {
4372 mchunkptr next = chunk_plus_offset(p, psize);
4373 if (!pinuse(p)) {
4374 mchunkptr prev;
4375 size_t prevsize = p->prev_foot;
4376 if (is_mmapped(p)) {
4377 psize += prevsize + MMAP_FOOT_PAD;
4378 if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
4379 m->footprint -= psize;
4380 return;
4381 }
4382 prev = chunk_minus_offset(p, prevsize);
4383 psize += prevsize;
4384 p = prev;
4385 if (RTCHECK(ok_address(m, prev))) { /* consolidate backward */
4386 if (p != m->dv) {
4387 unlink_chunk(m, p, prevsize);
4388 }
4389 else if ((next->head & INUSE_BITS) == INUSE_BITS) {
4390 m->dvsize = psize;
4391 set_free_with_pinuse(p, psize, next);
4392 return;
4393 }
4394 }
4395 else {
4396 CORRUPTION_ERROR_ACTION(m);
4397 return;
4398 }
4399 }
4400 if (RTCHECK(ok_address(m, next))) {
4401 if (!cinuse(next)) { /* consolidate forward */
4402 if (next == m->top) {
4403 size_t tsize = m->topsize += psize;
4404 m->top = p;
4405 p->head = tsize | PINUSE_BIT;
4406 if (p == m->dv) {
4407 m->dv = 0;
4408 m->dvsize = 0;
4409 }
4410 return;
4411 }
4412 else if (next == m->dv) {
4413 size_t dsize = m->dvsize += psize;
4414 m->dv = p;
4415 set_size_and_pinuse_of_free_chunk(p, dsize);
4416 return;
4417 }
4418 else {
4419 size_t nsize = chunksize(next);
4420 psize += nsize;
4421 unlink_chunk(m, next, nsize);
4422 set_size_and_pinuse_of_free_chunk(p, psize);
4423 if (p == m->dv) {
4424 m->dvsize = psize;
4425 return;
4426 }
4427 }
4428 }
4429 else {
4430 set_free_with_pinuse(p, psize, next);
4431 }
4432 insert_chunk(m, p, psize);
4433 }
4434 else {
4435 CORRUPTION_ERROR_ACTION(m);
4436 }
4437 }
4438
4439 /* ---------------------------- malloc --------------------------- */
4440
4441 /* allocate a large request from the best fitting chunk in a treebin */
tmalloc_large(mstate m,size_t nb)4442 static void* tmalloc_large(mstate m, size_t nb) {
4443 tchunkptr v = 0;
4444 size_t rsize = -nb; /* Unsigned negation */
4445 tchunkptr t;
4446 bindex_t idx;
4447 compute_tree_index(nb, idx);
4448 if ((t = *treebin_at(m, idx)) != 0) {
4449 /* Traverse tree for this bin looking for node with size == nb */
4450 size_t sizebits = nb << leftshift_for_tree_index(idx);
4451 tchunkptr rst = 0; /* The deepest untaken right subtree */
4452 for (;;) {
4453 tchunkptr rt;
4454 size_t trem = chunksize(t) - nb;
4455 if (trem < rsize) {
4456 v = t;
4457 if ((rsize = trem) == 0)
4458 break;
4459 }
4460 rt = t->child[1];
4461 t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
4462 if (rt != 0 && rt != t)
4463 rst = rt;
4464 if (t == 0) {
4465 t = rst; /* set t to least subtree holding sizes > nb */
4466 break;
4467 }
4468 sizebits <<= 1;
4469 }
4470 }
4471 if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
4472 binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
4473 if (leftbits != 0) {
4474 bindex_t i;
4475 binmap_t leastbit = least_bit(leftbits);
4476 compute_bit2idx(leastbit, i);
4477 t = *treebin_at(m, i);
4478 }
4479 }
4480
4481 while (t != 0) { /* find smallest of tree or subtree */
4482 size_t trem = chunksize(t) - nb;
4483 if (trem < rsize) {
4484 rsize = trem;
4485 v = t;
4486 }
4487 t = leftmost_child(t);
4488 }
4489
4490 /* If dv is a better fit, return 0 so malloc will use it */
4491 if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
4492 if (RTCHECK(ok_address(m, v))) { /* split */
4493 mchunkptr r = chunk_plus_offset(v, nb);
4494 assert(chunksize(v) == rsize + nb);
4495 if (RTCHECK(ok_next(v, r))) {
4496 unlink_large_chunk(m, v);
4497 if (rsize < MIN_CHUNK_SIZE)
4498 set_inuse_and_pinuse(m, v, (rsize + nb));
4499 else {
4500 set_size_and_pinuse_of_inuse_chunk(m, v, nb);
4501 set_size_and_pinuse_of_free_chunk(r, rsize);
4502 insert_chunk(m, r, rsize);
4503 }
4504 return chunk2mem(v);
4505 }
4506 }
4507 CORRUPTION_ERROR_ACTION(m);
4508 }
4509 return 0;
4510 }
4511
4512 /* allocate a small request from the best fitting chunk in a treebin */
tmalloc_small(mstate m,size_t nb)4513 static void* tmalloc_small(mstate m, size_t nb) {
4514 tchunkptr t, v;
4515 size_t rsize;
4516 bindex_t i;
4517 binmap_t leastbit = least_bit(m->treemap);
4518 compute_bit2idx(leastbit, i);
4519 v = t = *treebin_at(m, i);
4520 rsize = chunksize(t) - nb;
4521
4522 while ((t = leftmost_child(t)) != 0) {
4523 size_t trem = chunksize(t) - nb;
4524 if (trem < rsize) {
4525 rsize = trem;
4526 v = t;
4527 }
4528 }
4529
4530 if (RTCHECK(ok_address(m, v))) {
4531 mchunkptr r = chunk_plus_offset(v, nb);
4532 assert(chunksize(v) == rsize + nb);
4533 if (RTCHECK(ok_next(v, r))) {
4534 unlink_large_chunk(m, v);
4535 if (rsize < MIN_CHUNK_SIZE)
4536 set_inuse_and_pinuse(m, v, (rsize + nb));
4537 else {
4538 set_size_and_pinuse_of_inuse_chunk(m, v, nb);
4539 set_size_and_pinuse_of_free_chunk(r, rsize);
4540 replace_dv(m, r, rsize);
4541 }
4542 return chunk2mem(v);
4543 }
4544 }
4545
4546 CORRUPTION_ERROR_ACTION(m);
4547 return 0;
4548 }
4549
4550 #if !ONLY_MSPACES
4551
dlmalloc(size_t bytes)4552 void* dlmalloc(size_t bytes) {
4553 /*
4554 Basic algorithm:
4555 If a small request (< 256 bytes minus per-chunk overhead):
4556 1. If one exists, use a remainderless chunk in associated smallbin.
4557 (Remainderless means that there are too few excess bytes to
4558 represent as a chunk.)
4559 2. If it is big enough, use the dv chunk, which is normally the
4560 chunk adjacent to the one used for the most recent small request.
4561 3. If one exists, split the smallest available chunk in a bin,
4562 saving remainder in dv.
4563 4. If it is big enough, use the top chunk.
4564 5. If available, get memory from system and use it
4565 Otherwise, for a large request:
4566 1. Find the smallest available binned chunk that fits, and use it
4567 if it is better fitting than dv chunk, splitting if necessary.
4568 2. If better fitting than any binned chunk, use the dv chunk.
4569 3. If it is big enough, use the top chunk.
4570 4. If request size >= mmap threshold, try to directly mmap this chunk.
4571 5. If available, get memory from system and use it
4572
4573 The ugly goto's here ensure that postaction occurs along all paths.
4574 */
4575
4576 #if USE_LOCKS
4577 ensure_initialization(); /* initialize in sys_alloc if not using locks */
4578 #endif
4579
4580 if (!PREACTION(gm)) {
4581 void* mem;
4582 size_t nb;
4583 if (bytes <= MAX_SMALL_REQUEST) {
4584 bindex_t idx;
4585 binmap_t smallbits;
4586 nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
4587 idx = small_index(nb);
4588 smallbits = gm->smallmap >> idx;
4589
4590 if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
4591 mchunkptr b, p;
4592 idx += ~smallbits & 1; /* Uses next bin if idx empty */
4593 b = smallbin_at(gm, idx);
4594 p = b->fd;
4595 assert(chunksize(p) == small_index2size(idx));
4596 unlink_first_small_chunk(gm, b, p, idx);
4597 set_inuse_and_pinuse(gm, p, small_index2size(idx));
4598 mem = chunk2mem(p);
4599 check_malloced_chunk(gm, mem, nb);
4600 goto postaction;
4601 }
4602
4603 else if (nb > gm->dvsize) {
4604 if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
4605 mchunkptr b, p, r;
4606 size_t rsize;
4607 bindex_t i;
4608 binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
4609 binmap_t leastbit = least_bit(leftbits);
4610 compute_bit2idx(leastbit, i);
4611 b = smallbin_at(gm, i);
4612 p = b->fd;
4613 assert(chunksize(p) == small_index2size(i));
4614 unlink_first_small_chunk(gm, b, p, i);
4615 rsize = small_index2size(i) - nb;
4616 /* Fit here cannot be remainderless if 4byte sizes */
4617 if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
4618 set_inuse_and_pinuse(gm, p, small_index2size(i));
4619 else {
4620 set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4621 r = chunk_plus_offset(p, nb);
4622 set_size_and_pinuse_of_free_chunk(r, rsize);
4623 replace_dv(gm, r, rsize);
4624 }
4625 mem = chunk2mem(p);
4626 check_malloced_chunk(gm, mem, nb);
4627 goto postaction;
4628 }
4629
4630 else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
4631 check_malloced_chunk(gm, mem, nb);
4632 goto postaction;
4633 }
4634 }
4635 }
4636 else if (bytes >= MAX_REQUEST)
4637 nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
4638 else {
4639 nb = pad_request(bytes);
4640 if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
4641 check_malloced_chunk(gm, mem, nb);
4642 goto postaction;
4643 }
4644 }
4645
4646 if (nb <= gm->dvsize) {
4647 size_t rsize = gm->dvsize - nb;
4648 mchunkptr p = gm->dv;
4649 if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
4650 mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
4651 gm->dvsize = rsize;
4652 set_size_and_pinuse_of_free_chunk(r, rsize);
4653 set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4654 }
4655 else { /* exhaust dv */
4656 size_t dvs = gm->dvsize;
4657 gm->dvsize = 0;
4658 gm->dv = 0;
4659 set_inuse_and_pinuse(gm, p, dvs);
4660 }
4661 mem = chunk2mem(p);
4662 check_malloced_chunk(gm, mem, nb);
4663 goto postaction;
4664 }
4665
4666 else if (nb < gm->topsize) { /* Split top */
4667 size_t rsize = gm->topsize -= nb;
4668 mchunkptr p = gm->top;
4669 mchunkptr r = gm->top = chunk_plus_offset(p, nb);
4670 r->head = rsize | PINUSE_BIT;
4671 set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4672 mem = chunk2mem(p);
4673 check_top_chunk(gm, gm->top);
4674 check_malloced_chunk(gm, mem, nb);
4675 goto postaction;
4676 }
4677
4678 mem = sys_alloc(gm, nb);
4679
4680 postaction:
4681 POSTACTION(gm);
4682 return mem;
4683 }
4684
4685 return 0;
4686 }
4687
4688 /* ---------------------------- free --------------------------- */
4689
dlfree(void * mem)4690 void dlfree(void* mem) {
4691 /*
4692 Consolidate freed chunks with preceeding or succeeding bordering
4693 free chunks, if they exist, and then place in a bin. Intermixed
4694 with special cases for top, dv, mmapped chunks, and usage errors.
4695 */
4696
4697 if (mem != 0) {
4698 mchunkptr p = mem2chunk(mem);
4699 #if FOOTERS
4700 mstate fm = get_mstate_for(p);
4701 if (!ok_magic(fm)) {
4702 USAGE_ERROR_ACTION(fm, p);
4703 return;
4704 }
4705 #else /* FOOTERS */
4706 #define fm gm
4707 #endif /* FOOTERS */
4708 if (!PREACTION(fm)) {
4709 check_inuse_chunk(fm, p);
4710 if (RTCHECK(ok_address(fm, p) && ok_inuse(p))) {
4711 size_t psize = chunksize(p);
4712 mchunkptr next = chunk_plus_offset(p, psize);
4713 if (!pinuse(p)) {
4714 size_t prevsize = p->prev_foot;
4715 if (is_mmapped(p)) {
4716 psize += prevsize + MMAP_FOOT_PAD;
4717 if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
4718 fm->footprint -= psize;
4719 goto postaction;
4720 }
4721 else {
4722 mchunkptr prev = chunk_minus_offset(p, prevsize);
4723 psize += prevsize;
4724 p = prev;
4725 if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
4726 if (p != fm->dv) {
4727 unlink_chunk(fm, p, prevsize);
4728 }
4729 else if ((next->head & INUSE_BITS) == INUSE_BITS) {
4730 fm->dvsize = psize;
4731 set_free_with_pinuse(p, psize, next);
4732 goto postaction;
4733 }
4734 }
4735 else
4736 goto erroraction;
4737 }
4738 }
4739
4740 if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
4741 if (!cinuse(next)) { /* consolidate forward */
4742 if (next == fm->top) {
4743 size_t tsize = fm->topsize += psize;
4744 fm->top = p;
4745 p->head = tsize | PINUSE_BIT;
4746 if (p == fm->dv) {
4747 fm->dv = 0;
4748 fm->dvsize = 0;
4749 }
4750 if (should_trim(fm, tsize))
4751 sys_trim(fm, 0);
4752 goto postaction;
4753 }
4754 else if (next == fm->dv) {
4755 size_t dsize = fm->dvsize += psize;
4756 fm->dv = p;
4757 set_size_and_pinuse_of_free_chunk(p, dsize);
4758 goto postaction;
4759 }
4760 else {
4761 size_t nsize = chunksize(next);
4762 psize += nsize;
4763 unlink_chunk(fm, next, nsize);
4764 set_size_and_pinuse_of_free_chunk(p, psize);
4765 if (p == fm->dv) {
4766 fm->dvsize = psize;
4767 goto postaction;
4768 }
4769 }
4770 }
4771 else
4772 set_free_with_pinuse(p, psize, next);
4773
4774 if (is_small(psize)) {
4775 insert_small_chunk(fm, p, psize);
4776 check_free_chunk(fm, p);
4777 }
4778 else {
4779 tchunkptr tp = (tchunkptr)p;
4780 insert_large_chunk(fm, tp, psize);
4781 check_free_chunk(fm, p);
4782 if (--fm->release_checks == 0)
4783 release_unused_segments(fm);
4784 }
4785 goto postaction;
4786 }
4787 }
4788 erroraction:
4789 USAGE_ERROR_ACTION(fm, p);
4790 postaction:
4791 POSTACTION(fm);
4792 }
4793 }
4794 #if !FOOTERS
4795 #undef fm
4796 #endif /* FOOTERS */
4797 }
4798
dlcalloc(size_t n_elements,size_t elem_size)4799 void* dlcalloc(size_t n_elements, size_t elem_size) {
4800 void* mem;
4801 size_t req = 0;
4802 if (n_elements != 0) {
4803 req = n_elements * elem_size;
4804 if (((n_elements | elem_size) & ~(size_t)0xffff) &&
4805 (req / n_elements != elem_size))
4806 req = MAX_SIZE_T; /* force downstream failure on overflow */
4807 }
4808 mem = dlmalloc(req);
4809 if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
4810 memset(mem, 0, req);
4811 return mem;
4812 }
4813
4814 #endif /* !ONLY_MSPACES */
4815
4816 /* ------------ Internal support for realloc, memalign, etc -------------- */
4817
4818 /* Try to realloc; only in-place unless can_move true */
try_realloc_chunk(mstate m,mchunkptr p,size_t nb,int can_move)4819 static mchunkptr try_realloc_chunk(mstate m, mchunkptr p, size_t nb,
4820 int can_move) {
4821 mchunkptr newp = 0;
4822 size_t oldsize = chunksize(p);
4823 mchunkptr next = chunk_plus_offset(p, oldsize);
4824 if (RTCHECK(ok_address(m, p) && ok_inuse(p) &&
4825 ok_next(p, next) && ok_pinuse(next))) {
4826 if (is_mmapped(p)) {
4827 newp = mmap_resize(m, p, nb, can_move);
4828 }
4829 else if (oldsize >= nb) { /* already big enough */
4830 size_t rsize = oldsize - nb;
4831 if (rsize >= MIN_CHUNK_SIZE) { /* split off remainder */
4832 mchunkptr r = chunk_plus_offset(p, nb);
4833 set_inuse(m, p, nb);
4834 set_inuse(m, r, rsize);
4835 dispose_chunk(m, r, rsize);
4836 }
4837 newp = p;
4838 }
4839 else if (next == m->top) { /* extend into top */
4840 if (oldsize + m->topsize > nb) {
4841 size_t newsize = oldsize + m->topsize;
4842 size_t newtopsize = newsize - nb;
4843 mchunkptr newtop = chunk_plus_offset(p, nb);
4844 set_inuse(m, p, nb);
4845 newtop->head = newtopsize |PINUSE_BIT;
4846 m->top = newtop;
4847 m->topsize = newtopsize;
4848 newp = p;
4849 }
4850 }
4851 else if (next == m->dv) { /* extend into dv */
4852 size_t dvs = m->dvsize;
4853 if (oldsize + dvs >= nb) {
4854 size_t dsize = oldsize + dvs - nb;
4855 if (dsize >= MIN_CHUNK_SIZE) {
4856 mchunkptr r = chunk_plus_offset(p, nb);
4857 mchunkptr n = chunk_plus_offset(r, dsize);
4858 set_inuse(m, p, nb);
4859 set_size_and_pinuse_of_free_chunk(r, dsize);
4860 clear_pinuse(n);
4861 m->dvsize = dsize;
4862 m->dv = r;
4863 }
4864 else { /* exhaust dv */
4865 size_t newsize = oldsize + dvs;
4866 set_inuse(m, p, newsize);
4867 m->dvsize = 0;
4868 m->dv = 0;
4869 }
4870 newp = p;
4871 }
4872 }
4873 else if (!cinuse(next)) { /* extend into next free chunk */
4874 size_t nextsize = chunksize(next);
4875 if (oldsize + nextsize >= nb) {
4876 size_t rsize = oldsize + nextsize - nb;
4877 unlink_chunk(m, next, nextsize);
4878 if (rsize < MIN_CHUNK_SIZE) {
4879 size_t newsize = oldsize + nextsize;
4880 set_inuse(m, p, newsize);
4881 }
4882 else {
4883 mchunkptr r = chunk_plus_offset(p, nb);
4884 set_inuse(m, p, nb);
4885 set_inuse(m, r, rsize);
4886 dispose_chunk(m, r, rsize);
4887 }
4888 newp = p;
4889 }
4890 }
4891 }
4892 else {
4893 USAGE_ERROR_ACTION(m, chunk2mem(p));
4894 }
4895 return newp;
4896 }
4897
internal_memalign(mstate m,size_t alignment,size_t bytes)4898 static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
4899 void* mem = 0;
4900 if (alignment < MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
4901 alignment = MIN_CHUNK_SIZE;
4902 if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */
4903 size_t a = MALLOC_ALIGNMENT << 1;
4904 while (a < alignment) a <<= 1;
4905 alignment = a;
4906 }
4907 if (bytes >= MAX_REQUEST - alignment) {
4908 if (m != 0) { /* Test isn't needed but avoids compiler warning */
4909 MALLOC_FAILURE_ACTION;
4910 }
4911 }
4912 else {
4913 size_t nb = request2size(bytes);
4914 size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
4915 mem = internal_malloc(m, req);
4916 if (mem != 0) {
4917 mchunkptr p = mem2chunk(mem);
4918 if (PREACTION(m))
4919 return 0;
4920 if ((((size_t)(mem)) & (alignment - 1)) != 0) { /* misaligned */
4921 /*
4922 Find an aligned spot inside chunk. Since we need to give
4923 back leading space in a chunk of at least MIN_CHUNK_SIZE, if
4924 the first calculation places us at a spot with less than
4925 MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
4926 We've allocated enough total room so that this is always
4927 possible.
4928 */
4929 char* br = (char*)mem2chunk((size_t)(((size_t)((char*)mem + alignment -
4930 SIZE_T_ONE)) &
4931 -alignment));
4932 char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
4933 br : br+alignment;
4934 mchunkptr newp = (mchunkptr)pos;
4935 size_t leadsize = pos - (char*)(p);
4936 size_t newsize = chunksize(p) - leadsize;
4937
4938 if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
4939 newp->prev_foot = p->prev_foot + leadsize;
4940 newp->head = newsize;
4941 }
4942 else { /* Otherwise, give back leader, use the rest */
4943 set_inuse(m, newp, newsize);
4944 set_inuse(m, p, leadsize);
4945 dispose_chunk(m, p, leadsize);
4946 }
4947 p = newp;
4948 }
4949
4950 /* Give back spare room at the end */
4951 if (!is_mmapped(p)) {
4952 size_t size = chunksize(p);
4953 if (size > nb + MIN_CHUNK_SIZE) {
4954 size_t remainder_size = size - nb;
4955 mchunkptr remainder = chunk_plus_offset(p, nb);
4956 set_inuse(m, p, nb);
4957 set_inuse(m, remainder, remainder_size);
4958 dispose_chunk(m, remainder, remainder_size);
4959 }
4960 }
4961
4962 mem = chunk2mem(p);
4963 assert (chunksize(p) >= nb);
4964 assert(((size_t)mem & (alignment - 1)) == 0);
4965 check_inuse_chunk(m, p);
4966 POSTACTION(m);
4967 }
4968 }
4969 return mem;
4970 }
4971
4972 /*
4973 Common support for independent_X routines, handling
4974 all of the combinations that can result.
4975 The opts arg has:
4976 bit 0 set if all elements are same size (using sizes[0])
4977 bit 1 set if elements should be zeroed
4978 */
ialloc(mstate m,size_t n_elements,size_t * sizes,int opts,void * chunks[])4979 static void** ialloc(mstate m,
4980 size_t n_elements,
4981 size_t* sizes,
4982 int opts,
4983 void* chunks[]) {
4984
4985 size_t element_size; /* chunksize of each element, if all same */
4986 size_t contents_size; /* total size of elements */
4987 size_t array_size; /* request size of pointer array */
4988 void* mem; /* malloced aggregate space */
4989 mchunkptr p; /* corresponding chunk */
4990 size_t remainder_size; /* remaining bytes while splitting */
4991 void** marray; /* either "chunks" or malloced ptr array */
4992 mchunkptr array_chunk; /* chunk for malloced ptr array */
4993 flag_t was_enabled; /* to disable mmap */
4994 size_t size;
4995 size_t i;
4996
4997 ensure_initialization();
4998 /* compute array length, if needed */
4999 if (chunks != 0) {
5000 if (n_elements == 0)
5001 return chunks; /* nothing to do */
5002 marray = chunks;
5003 array_size = 0;
5004 }
5005 else {
5006 /* if empty req, must still return chunk representing empty array */
5007 if (n_elements == 0)
5008 return (void**)internal_malloc(m, 0);
5009 marray = 0;
5010 array_size = request2size(n_elements * (sizeof(void*)));
5011 }
5012
5013 /* compute total element size */
5014 if (opts & 0x1) { /* all-same-size */
5015 element_size = request2size(*sizes);
5016 contents_size = n_elements * element_size;
5017 }
5018 else { /* add up all the sizes */
5019 element_size = 0;
5020 contents_size = 0;
5021 for (i = 0; i != n_elements; ++i)
5022 contents_size += request2size(sizes[i]);
5023 }
5024
5025 size = contents_size + array_size;
5026
5027 /*
5028 Allocate the aggregate chunk. First disable direct-mmapping so
5029 malloc won't use it, since we would not be able to later
5030 free/realloc space internal to a segregated mmap region.
5031 */
5032 was_enabled = use_mmap(m);
5033 disable_mmap(m);
5034 mem = internal_malloc(m, size - CHUNK_OVERHEAD);
5035 if (was_enabled)
5036 enable_mmap(m);
5037 if (mem == 0)
5038 return 0;
5039
5040 if (PREACTION(m)) return 0;
5041 p = mem2chunk(mem);
5042 remainder_size = chunksize(p);
5043
5044 assert(!is_mmapped(p));
5045
5046 if (opts & 0x2) { /* optionally clear the elements */
5047 memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
5048 }
5049
5050 /* If not provided, allocate the pointer array as final part of chunk */
5051 if (marray == 0) {
5052 size_t array_chunk_size;
5053 array_chunk = chunk_plus_offset(p, contents_size);
5054 array_chunk_size = remainder_size - contents_size;
5055 marray = (void**) (chunk2mem(array_chunk));
5056 set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
5057 remainder_size = contents_size;
5058 }
5059
5060 /* split out elements */
5061 for (i = 0; ; ++i) {
5062 marray[i] = chunk2mem(p);
5063 if (i != n_elements-1) {
5064 if (element_size != 0)
5065 size = element_size;
5066 else
5067 size = request2size(sizes[i]);
5068 remainder_size -= size;
5069 set_size_and_pinuse_of_inuse_chunk(m, p, size);
5070 p = chunk_plus_offset(p, size);
5071 }
5072 else { /* the final element absorbs any overallocation slop */
5073 set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
5074 break;
5075 }
5076 }
5077
5078 #if DEBUG
5079 if (marray != chunks) {
5080 /* final element must have exactly exhausted chunk */
5081 if (element_size != 0) {
5082 assert(remainder_size == element_size);
5083 }
5084 else {
5085 assert(remainder_size == request2size(sizes[i]));
5086 }
5087 check_inuse_chunk(m, mem2chunk(marray));
5088 }
5089 for (i = 0; i != n_elements; ++i)
5090 check_inuse_chunk(m, mem2chunk(marray[i]));
5091
5092 #endif /* DEBUG */
5093
5094 POSTACTION(m);
5095 return marray;
5096 }
5097
5098 /* Try to free all pointers in the given array.
5099 Note: this could be made faster, by delaying consolidation,
5100 at the price of disabling some user integrity checks, We
5101 still optimize some consolidations by combining adjacent
5102 chunks before freeing, which will occur often if allocated
5103 with ialloc or the array is sorted.
5104 */
internal_bulk_free(mstate m,void * array[],size_t nelem)5105 static size_t internal_bulk_free(mstate m, void* array[], size_t nelem) {
5106 size_t unfreed = 0;
5107 if (!PREACTION(m)) {
5108 void** a;
5109 void** fence = &(array[nelem]);
5110 for (a = array; a != fence; ++a) {
5111 void* mem = *a;
5112 if (mem != 0) {
5113 mchunkptr p = mem2chunk(mem);
5114 size_t psize = chunksize(p);
5115 #if FOOTERS
5116 if (get_mstate_for(p) != m) {
5117 ++unfreed;
5118 continue;
5119 }
5120 #endif
5121 check_inuse_chunk(m, p);
5122 *a = 0;
5123 if (RTCHECK(ok_address(m, p) && ok_inuse(p))) {
5124 void ** b = a + 1; /* try to merge with next chunk */
5125 mchunkptr next = next_chunk(p);
5126 if (b != fence && *b == chunk2mem(next)) {
5127 size_t newsize = chunksize(next) + psize;
5128 set_inuse(m, p, newsize);
5129 *b = chunk2mem(p);
5130 }
5131 else
5132 dispose_chunk(m, p, psize);
5133 }
5134 else {
5135 CORRUPTION_ERROR_ACTION(m);
5136 break;
5137 }
5138 }
5139 }
5140 if (should_trim(m, m->topsize))
5141 sys_trim(m, 0);
5142 POSTACTION(m);
5143 }
5144 return unfreed;
5145 }
5146
5147 /* Traversal */
5148 #if MALLOC_INSPECT_ALL
internal_inspect_all(mstate m,void (* handler)(void * start,void * end,size_t used_bytes,void * callback_arg),void * arg)5149 static void internal_inspect_all(mstate m,
5150 void(*handler)(void *start,
5151 void *end,
5152 size_t used_bytes,
5153 void* callback_arg),
5154 void* arg) {
5155 if (is_initialized(m)) {
5156 mchunkptr top = m->top;
5157 msegmentptr s;
5158 for (s = &m->seg; s != 0; s = s->next) {
5159 mchunkptr q = align_as_chunk(s->base);
5160 while (segment_holds(s, q) && q->head != FENCEPOST_HEAD) {
5161 mchunkptr next = next_chunk(q);
5162 size_t sz = chunksize(q);
5163 size_t used;
5164 void* start;
5165 if (is_inuse(q)) {
5166 used = sz - CHUNK_OVERHEAD; /* must not be mmapped */
5167 start = chunk2mem(q);
5168 }
5169 else {
5170 used = 0;
5171 if (is_small(sz)) { /* offset by possible bookkeeping */
5172 start = (void*)((char*)q + sizeof(struct malloc_chunk));
5173 }
5174 else {
5175 start = (void*)((char*)q + sizeof(struct malloc_tree_chunk));
5176 }
5177 }
5178 if (start < (void*)next) /* skip if all space is bookkeeping */
5179 handler(start, next, used, arg);
5180 if (q == top)
5181 break;
5182 q = next;
5183 }
5184 }
5185 }
5186 }
5187 #endif /* MALLOC_INSPECT_ALL */
5188
5189 /* ------------------ Exported realloc, memalign, etc -------------------- */
5190
5191 #if !ONLY_MSPACES
5192
dlrealloc(void * oldmem,size_t bytes)5193 void* dlrealloc(void* oldmem, size_t bytes) {
5194 void* mem = 0;
5195 if (oldmem == 0) {
5196 mem = dlmalloc(bytes);
5197 }
5198 else if (bytes >= MAX_REQUEST) {
5199 MALLOC_FAILURE_ACTION;
5200 }
5201 #ifdef REALLOC_ZERO_BYTES_FREES
5202 else if (bytes == 0) {
5203 dlfree(oldmem);
5204 }
5205 #endif /* REALLOC_ZERO_BYTES_FREES */
5206 else {
5207 size_t nb = request2size(bytes);
5208 mchunkptr oldp = mem2chunk(oldmem);
5209 #if ! FOOTERS
5210 mstate m = gm;
5211 #else /* FOOTERS */
5212 mstate m = get_mstate_for(oldp);
5213 if (!ok_magic(m)) {
5214 USAGE_ERROR_ACTION(m, oldmem);
5215 return 0;
5216 }
5217 #endif /* FOOTERS */
5218 if (!PREACTION(m)) {
5219 mchunkptr newp = try_realloc_chunk(m, oldp, nb, 1);
5220 POSTACTION(m);
5221 if (newp != 0) {
5222 check_inuse_chunk(m, newp);
5223 mem = chunk2mem(newp);
5224 }
5225 else {
5226 mem = internal_malloc(m, bytes);
5227 if (mem != 0) {
5228 size_t oc = chunksize(oldp) - overhead_for(oldp);
5229 memcpy(mem, oldmem, (oc < bytes)? oc : bytes);
5230 internal_free(m, oldmem);
5231 }
5232 }
5233 }
5234 }
5235 return mem;
5236 }
5237
dlrealloc_in_place(void * oldmem,size_t bytes)5238 void* dlrealloc_in_place(void* oldmem, size_t bytes) {
5239 void* mem = 0;
5240 if (oldmem != 0) {
5241 if (bytes >= MAX_REQUEST) {
5242 MALLOC_FAILURE_ACTION;
5243 }
5244 else {
5245 size_t nb = request2size(bytes);
5246 mchunkptr oldp = mem2chunk(oldmem);
5247 #if ! FOOTERS
5248 mstate m = gm;
5249 #else /* FOOTERS */
5250 mstate m = get_mstate_for(oldp);
5251 if (!ok_magic(m)) {
5252 USAGE_ERROR_ACTION(m, oldmem);
5253 return 0;
5254 }
5255 #endif /* FOOTERS */
5256 if (!PREACTION(m)) {
5257 mchunkptr newp = try_realloc_chunk(m, oldp, nb, 0);
5258 POSTACTION(m);
5259 if (newp == oldp) {
5260 check_inuse_chunk(m, newp);
5261 mem = oldmem;
5262 }
5263 }
5264 }
5265 }
5266 return mem;
5267 }
5268
dlmemalign(size_t alignment,size_t bytes)5269 void* dlmemalign(size_t alignment, size_t bytes) {
5270 if (alignment <= MALLOC_ALIGNMENT) {
5271 return dlmalloc(bytes);
5272 }
5273 return internal_memalign(gm, alignment, bytes);
5274 }
5275
dlposix_memalign(void ** pp,size_t alignment,size_t bytes)5276 int dlposix_memalign(void** pp, size_t alignment, size_t bytes) {
5277 void* mem = 0;
5278 if (alignment == MALLOC_ALIGNMENT)
5279 mem = dlmalloc(bytes);
5280 else {
5281 size_t d = alignment / sizeof(void*);
5282 size_t r = alignment % sizeof(void*);
5283 if (r != 0 || d == 0 || (d & (d-SIZE_T_ONE)) != 0)
5284 return EINVAL;
5285 else if (bytes <= MAX_REQUEST - alignment) {
5286 if (alignment < MIN_CHUNK_SIZE)
5287 alignment = MIN_CHUNK_SIZE;
5288 mem = internal_memalign(gm, alignment, bytes);
5289 }
5290 }
5291 if (mem == 0)
5292 return ENOMEM;
5293 else {
5294 *pp = mem;
5295 return 0;
5296 }
5297 }
5298
dlvalloc(size_t bytes)5299 void* dlvalloc(size_t bytes) {
5300 size_t pagesz;
5301 ensure_initialization();
5302 pagesz = mparams.page_size;
5303 return dlmemalign(pagesz, bytes);
5304 }
5305
dlpvalloc(size_t bytes)5306 void* dlpvalloc(size_t bytes) {
5307 size_t pagesz;
5308 ensure_initialization();
5309 pagesz = mparams.page_size;
5310 return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
5311 }
5312
dlindependent_calloc(size_t n_elements,size_t elem_size,void * chunks[])5313 void** dlindependent_calloc(size_t n_elements, size_t elem_size,
5314 void* chunks[]) {
5315 size_t sz = elem_size; /* serves as 1-element array */
5316 return ialloc(gm, n_elements, &sz, 3, chunks);
5317 }
5318
dlindependent_comalloc(size_t n_elements,size_t sizes[],void * chunks[])5319 void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
5320 void* chunks[]) {
5321 return ialloc(gm, n_elements, sizes, 0, chunks);
5322 }
5323
dlbulk_free(void * array[],size_t nelem)5324 size_t dlbulk_free(void* array[], size_t nelem) {
5325 return internal_bulk_free(gm, array, nelem);
5326 }
5327
5328 #if MALLOC_INSPECT_ALL
dlmalloc_inspect_all(void (* handler)(void * start,void * end,size_t used_bytes,void * callback_arg),void * arg)5329 void dlmalloc_inspect_all(void(*handler)(void *start,
5330 void *end,
5331 size_t used_bytes,
5332 void* callback_arg),
5333 void* arg) {
5334 ensure_initialization();
5335 if (!PREACTION(gm)) {
5336 internal_inspect_all(gm, handler, arg);
5337 POSTACTION(gm);
5338 }
5339 }
5340 #endif /* MALLOC_INSPECT_ALL */
5341
dlmalloc_trim(size_t pad)5342 int dlmalloc_trim(size_t pad) {
5343 int result = 0;
5344 ensure_initialization();
5345 if (!PREACTION(gm)) {
5346 result = sys_trim(gm, pad);
5347 POSTACTION(gm);
5348 }
5349 return result;
5350 }
5351
dlmalloc_footprint(void)5352 size_t dlmalloc_footprint(void) {
5353 return gm->footprint;
5354 }
5355
dlmalloc_max_footprint(void)5356 size_t dlmalloc_max_footprint(void) {
5357 return gm->max_footprint;
5358 }
5359
dlmalloc_footprint_limit(void)5360 size_t dlmalloc_footprint_limit(void) {
5361 size_t maf = gm->footprint_limit;
5362 return maf == 0 ? MAX_SIZE_T : maf;
5363 }
5364
dlmalloc_set_footprint_limit(size_t bytes)5365 size_t dlmalloc_set_footprint_limit(size_t bytes) {
5366 size_t result; /* invert sense of 0 */
5367 if (bytes == 0)
5368 result = granularity_align(1); /* Use minimal size */
5369 if (bytes == MAX_SIZE_T)
5370 result = 0; /* disable */
5371 else
5372 result = granularity_align(bytes);
5373 return gm->footprint_limit = result;
5374 }
5375
5376 #if !NO_MALLINFO
dlmallinfo(void)5377 struct mallinfo dlmallinfo(void) {
5378 return internal_mallinfo(gm);
5379 }
5380 #endif /* NO_MALLINFO */
5381
5382 #if !NO_MALLOC_STATS
dlmalloc_stats()5383 void dlmalloc_stats() {
5384 internal_malloc_stats(gm);
5385 }
5386 #endif /* NO_MALLOC_STATS */
5387
dlmallopt(int param_number,int value)5388 int dlmallopt(int param_number, int value) {
5389 return change_mparam(param_number, value);
5390 }
5391
dlmalloc_usable_size(void * mem)5392 size_t dlmalloc_usable_size(void* mem) {
5393 if (mem != 0) {
5394 mchunkptr p = mem2chunk(mem);
5395 if (is_inuse(p))
5396 return chunksize(p) - overhead_for(p);
5397 }
5398 return 0;
5399 }
5400
5401 #endif /* !ONLY_MSPACES */
5402
5403 /* ----------------------------- user mspaces ---------------------------- */
5404
5405 #if MSPACES
5406
init_user_mstate(char * tbase,size_t tsize)5407 static mstate init_user_mstate(char* tbase, size_t tsize) {
5408 size_t msize = pad_request(sizeof(struct malloc_state));
5409 mchunkptr mn;
5410 mchunkptr msp = align_as_chunk(tbase);
5411 mstate m = (mstate)(chunk2mem(msp));
5412 memset(m, 0, msize);
5413 (void)INITIAL_LOCK(&m->mutex);
5414 msp->head = (msize|INUSE_BITS);
5415 m->seg.base = m->least_addr = tbase;
5416 m->seg.size = m->footprint = m->max_footprint = tsize;
5417 m->magic = mparams.magic;
5418 m->release_checks = MAX_RELEASE_CHECK_RATE;
5419 m->mflags = mparams.default_mflags;
5420 m->extp = 0;
5421 m->exts = 0;
5422 disable_contiguous(m);
5423 init_bins(m);
5424 mn = next_chunk(mem2chunk(m));
5425 init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
5426 check_top_chunk(m, m->top);
5427 return m;
5428 }
5429
create_mspace(size_t capacity,int locked)5430 mspace create_mspace(size_t capacity, int locked) {
5431 mstate m = 0;
5432 size_t msize;
5433 ensure_initialization();
5434 msize = pad_request(sizeof(struct malloc_state));
5435 if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
5436 size_t rs = ((capacity == 0)? mparams.granularity :
5437 (capacity + TOP_FOOT_SIZE + msize));
5438 size_t tsize = granularity_align(rs);
5439 char* tbase = (char*)(CALL_MMAP(tsize));
5440 if (tbase != CMFAIL) {
5441 m = init_user_mstate(tbase, tsize);
5442 m->seg.sflags = USE_MMAP_BIT;
5443 set_lock(m, locked);
5444 }
5445 }
5446 return (mspace)m;
5447 }
5448
5449
create_mspace_with_base(void * base,size_t capacity,int locked)5450 mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
5451 mstate m = 0;
5452 size_t msize;
5453 ensure_initialization();
5454 msize = pad_request(sizeof(struct malloc_state));
5455 if (capacity > msize + TOP_FOOT_SIZE &&
5456 capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
5457 m = init_user_mstate((char*)base, capacity);
5458 m->seg.sflags = EXTERN_BIT;
5459 set_lock(m, locked);
5460 }
5461
5462 return (mspace)m;
5463 }
5464
mspace_track_large_chunks(mspace msp,int enable)5465 int mspace_track_large_chunks(mspace msp, int enable) {
5466 int ret = 0;
5467 mstate ms = (mstate)msp;
5468 if (!PREACTION(ms)) {
5469 if (!use_mmap(ms)) {
5470 ret = 1;
5471 }
5472 if (!enable) {
5473 enable_mmap(ms);
5474 } else {
5475 disable_mmap(ms);
5476 }
5477 POSTACTION(ms);
5478 }
5479 return ret;
5480 }
5481
destroy_mspace(mspace msp)5482 size_t destroy_mspace(mspace msp) {
5483 size_t freed = 0;
5484 mstate ms = (mstate)msp;
5485 if (ok_magic(ms)) {
5486 msegmentptr sp = &ms->seg;
5487 (void)DESTROY_LOCK(&ms->mutex); /* destroy before unmapped */
5488 while (sp != 0) {
5489 char* base = sp->base;
5490 size_t size = sp->size;
5491 flag_t flag = sp->sflags;
5492 (void)base; /* placate people compiling -Wunused-variable */
5493 sp = sp->next;
5494 if ((flag & USE_MMAP_BIT) && !(flag & EXTERN_BIT) &&
5495 CALL_MUNMAP(base, size) == 0)
5496 freed += size;
5497 }
5498 }
5499 else {
5500 USAGE_ERROR_ACTION(ms,ms);
5501 }
5502 return freed;
5503 }
5504
5505 /*
5506 mspace versions of routines are near-clones of the global
5507 versions. This is not so nice but better than the alternatives.
5508 */
5509
mspace_malloc(mspace msp,size_t bytes)5510 void* mspace_malloc(mspace msp, size_t bytes) {
5511 mstate ms = (mstate)msp;
5512 if (!ok_magic(ms)) {
5513 USAGE_ERROR_ACTION(ms,ms);
5514 return 0;
5515 }
5516 if (!PREACTION(ms)) {
5517 void* mem;
5518 size_t nb;
5519 if (bytes <= MAX_SMALL_REQUEST) {
5520 bindex_t idx;
5521 binmap_t smallbits;
5522 nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
5523 idx = small_index(nb);
5524 smallbits = ms->smallmap >> idx;
5525
5526 if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
5527 mchunkptr b, p;
5528 idx += ~smallbits & 1; /* Uses next bin if idx empty */
5529 b = smallbin_at(ms, idx);
5530 p = b->fd;
5531 assert(chunksize(p) == small_index2size(idx));
5532 unlink_first_small_chunk(ms, b, p, idx);
5533 set_inuse_and_pinuse(ms, p, small_index2size(idx));
5534 mem = chunk2mem(p);
5535 check_malloced_chunk(ms, mem, nb);
5536 goto postaction;
5537 }
5538
5539 else if (nb > ms->dvsize) {
5540 if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
5541 mchunkptr b, p, r;
5542 size_t rsize;
5543 bindex_t i;
5544 binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
5545 binmap_t leastbit = least_bit(leftbits);
5546 compute_bit2idx(leastbit, i);
5547 b = smallbin_at(ms, i);
5548 p = b->fd;
5549 assert(chunksize(p) == small_index2size(i));
5550 unlink_first_small_chunk(ms, b, p, i);
5551 rsize = small_index2size(i) - nb;
5552 /* Fit here cannot be remainderless if 4byte sizes */
5553 if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
5554 set_inuse_and_pinuse(ms, p, small_index2size(i));
5555 else {
5556 set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
5557 r = chunk_plus_offset(p, nb);
5558 set_size_and_pinuse_of_free_chunk(r, rsize);
5559 replace_dv(ms, r, rsize);
5560 }
5561 mem = chunk2mem(p);
5562 check_malloced_chunk(ms, mem, nb);
5563 goto postaction;
5564 }
5565
5566 else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
5567 check_malloced_chunk(ms, mem, nb);
5568 goto postaction;
5569 }
5570 }
5571 }
5572 else if (bytes >= MAX_REQUEST)
5573 nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
5574 else {
5575 nb = pad_request(bytes);
5576 if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
5577 check_malloced_chunk(ms, mem, nb);
5578 goto postaction;
5579 }
5580 }
5581
5582 if (nb <= ms->dvsize) {
5583 size_t rsize = ms->dvsize - nb;
5584 mchunkptr p = ms->dv;
5585 if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
5586 mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
5587 ms->dvsize = rsize;
5588 set_size_and_pinuse_of_free_chunk(r, rsize);
5589 set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
5590 }
5591 else { /* exhaust dv */
5592 size_t dvs = ms->dvsize;
5593 ms->dvsize = 0;
5594 ms->dv = 0;
5595 set_inuse_and_pinuse(ms, p, dvs);
5596 }
5597 mem = chunk2mem(p);
5598 check_malloced_chunk(ms, mem, nb);
5599 goto postaction;
5600 }
5601
5602 else if (nb < ms->topsize) { /* Split top */
5603 size_t rsize = ms->topsize -= nb;
5604 mchunkptr p = ms->top;
5605 mchunkptr r = ms->top = chunk_plus_offset(p, nb);
5606 r->head = rsize | PINUSE_BIT;
5607 set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
5608 mem = chunk2mem(p);
5609 check_top_chunk(ms, ms->top);
5610 check_malloced_chunk(ms, mem, nb);
5611 goto postaction;
5612 }
5613
5614 mem = sys_alloc(ms, nb);
5615
5616 postaction:
5617 POSTACTION(ms);
5618 return mem;
5619 }
5620
5621 return 0;
5622 }
5623
mspace_free(mspace msp,void * mem)5624 void mspace_free(mspace msp, void* mem) {
5625 if (mem != 0) {
5626 mchunkptr p = mem2chunk(mem);
5627 #if FOOTERS
5628 mstate fm = get_mstate_for(p);
5629 (void)msp; /* placate people compiling -Wunused */
5630 #else /* FOOTERS */
5631 mstate fm = (mstate)msp;
5632 #endif /* FOOTERS */
5633 if (!ok_magic(fm)) {
5634 USAGE_ERROR_ACTION(fm, p);
5635 return;
5636 }
5637 if (!PREACTION(fm)) {
5638 check_inuse_chunk(fm, p);
5639 if (RTCHECK(ok_address(fm, p) && ok_inuse(p))) {
5640 size_t psize = chunksize(p);
5641 mchunkptr next = chunk_plus_offset(p, psize);
5642 if (!pinuse(p)) {
5643 size_t prevsize = p->prev_foot;
5644 if (is_mmapped(p)) {
5645 psize += prevsize + MMAP_FOOT_PAD;
5646 if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
5647 fm->footprint -= psize;
5648 goto postaction;
5649 }
5650 else {
5651 mchunkptr prev = chunk_minus_offset(p, prevsize);
5652 psize += prevsize;
5653 p = prev;
5654 if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
5655 if (p != fm->dv) {
5656 unlink_chunk(fm, p, prevsize);
5657 }
5658 else if ((next->head & INUSE_BITS) == INUSE_BITS) {
5659 fm->dvsize = psize;
5660 set_free_with_pinuse(p, psize, next);
5661 goto postaction;
5662 }
5663 }
5664 else
5665 goto erroraction;
5666 }
5667 }
5668
5669 if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
5670 if (!cinuse(next)) { /* consolidate forward */
5671 if (next == fm->top) {
5672 size_t tsize = fm->topsize += psize;
5673 fm->top = p;
5674 p->head = tsize | PINUSE_BIT;
5675 if (p == fm->dv) {
5676 fm->dv = 0;
5677 fm->dvsize = 0;
5678 }
5679 if (should_trim(fm, tsize))
5680 sys_trim(fm, 0);
5681 goto postaction;
5682 }
5683 else if (next == fm->dv) {
5684 size_t dsize = fm->dvsize += psize;
5685 fm->dv = p;
5686 set_size_and_pinuse_of_free_chunk(p, dsize);
5687 goto postaction;
5688 }
5689 else {
5690 size_t nsize = chunksize(next);
5691 psize += nsize;
5692 unlink_chunk(fm, next, nsize);
5693 set_size_and_pinuse_of_free_chunk(p, psize);
5694 if (p == fm->dv) {
5695 fm->dvsize = psize;
5696 goto postaction;
5697 }
5698 }
5699 }
5700 else
5701 set_free_with_pinuse(p, psize, next);
5702
5703 if (is_small(psize)) {
5704 insert_small_chunk(fm, p, psize);
5705 check_free_chunk(fm, p);
5706 }
5707 else {
5708 tchunkptr tp = (tchunkptr)p;
5709 insert_large_chunk(fm, tp, psize);
5710 check_free_chunk(fm, p);
5711 if (--fm->release_checks == 0)
5712 release_unused_segments(fm);
5713 }
5714 goto postaction;
5715 }
5716 }
5717 erroraction:
5718 USAGE_ERROR_ACTION(fm, p);
5719 postaction:
5720 POSTACTION(fm);
5721 }
5722 }
5723 }
5724
mspace_calloc(mspace msp,size_t n_elements,size_t elem_size)5725 void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
5726 void* mem;
5727 size_t req = 0;
5728 mstate ms = (mstate)msp;
5729 if (!ok_magic(ms)) {
5730 USAGE_ERROR_ACTION(ms,ms);
5731 return 0;
5732 }
5733 if (n_elements != 0) {
5734 req = n_elements * elem_size;
5735 if (((n_elements | elem_size) & ~(size_t)0xffff) &&
5736 (req / n_elements != elem_size))
5737 req = MAX_SIZE_T; /* force downstream failure on overflow */
5738 }
5739 mem = internal_malloc(ms, req);
5740 if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
5741 memset(mem, 0, req);
5742 return mem;
5743 }
5744
mspace_realloc(mspace msp,void * oldmem,size_t bytes)5745 void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
5746 void* mem = 0;
5747 if (oldmem == 0) {
5748 mem = mspace_malloc(msp, bytes);
5749 }
5750 else if (bytes >= MAX_REQUEST) {
5751 MALLOC_FAILURE_ACTION;
5752 }
5753 #ifdef REALLOC_ZERO_BYTES_FREES
5754 else if (bytes == 0) {
5755 mspace_free(msp, oldmem);
5756 }
5757 #endif /* REALLOC_ZERO_BYTES_FREES */
5758 else {
5759 size_t nb = request2size(bytes);
5760 mchunkptr oldp = mem2chunk(oldmem);
5761 #if ! FOOTERS
5762 mstate m = (mstate)msp;
5763 #else /* FOOTERS */
5764 mstate m = get_mstate_for(oldp);
5765 if (!ok_magic(m)) {
5766 USAGE_ERROR_ACTION(m, oldmem);
5767 return 0;
5768 }
5769 #endif /* FOOTERS */
5770 if (!PREACTION(m)) {
5771 mchunkptr newp = try_realloc_chunk(m, oldp, nb, 1);
5772 POSTACTION(m);
5773 if (newp != 0) {
5774 check_inuse_chunk(m, newp);
5775 mem = chunk2mem(newp);
5776 }
5777 else {
5778 mem = mspace_malloc(m, bytes);
5779 if (mem != 0) {
5780 size_t oc = chunksize(oldp) - overhead_for(oldp);
5781 memcpy(mem, oldmem, (oc < bytes)? oc : bytes);
5782 mspace_free(m, oldmem);
5783 }
5784 }
5785 }
5786 }
5787 return mem;
5788 }
5789
mspace_realloc_in_place(mspace msp,void * oldmem,size_t bytes)5790 void* mspace_realloc_in_place(mspace msp, void* oldmem, size_t bytes) {
5791 void* mem = 0;
5792 if (oldmem != 0) {
5793 if (bytes >= MAX_REQUEST) {
5794 MALLOC_FAILURE_ACTION;
5795 }
5796 else {
5797 size_t nb = request2size(bytes);
5798 mchunkptr oldp = mem2chunk(oldmem);
5799 #if ! FOOTERS
5800 mstate m = (mstate)msp;
5801 #else /* FOOTERS */
5802 mstate m = get_mstate_for(oldp);
5803 (void)msp; /* placate people compiling -Wunused */
5804 if (!ok_magic(m)) {
5805 USAGE_ERROR_ACTION(m, oldmem);
5806 return 0;
5807 }
5808 #endif /* FOOTERS */
5809 if (!PREACTION(m)) {
5810 mchunkptr newp = try_realloc_chunk(m, oldp, nb, 0);
5811 POSTACTION(m);
5812 if (newp == oldp) {
5813 check_inuse_chunk(m, newp);
5814 mem = oldmem;
5815 }
5816 }
5817 }
5818 }
5819 return mem;
5820 }
5821
mspace_memalign(mspace msp,size_t alignment,size_t bytes)5822 void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
5823 mstate ms = (mstate)msp;
5824 if (!ok_magic(ms)) {
5825 USAGE_ERROR_ACTION(ms,ms);
5826 return 0;
5827 }
5828 if (alignment <= MALLOC_ALIGNMENT)
5829 return mspace_malloc(msp, bytes);
5830 return internal_memalign(ms, alignment, bytes);
5831 }
5832
mspace_independent_calloc(mspace msp,size_t n_elements,size_t elem_size,void * chunks[])5833 void** mspace_independent_calloc(mspace msp, size_t n_elements,
5834 size_t elem_size, void* chunks[]) {
5835 size_t sz = elem_size; /* serves as 1-element array */
5836 mstate ms = (mstate)msp;
5837 if (!ok_magic(ms)) {
5838 USAGE_ERROR_ACTION(ms,ms);
5839 return 0;
5840 }
5841 return ialloc(ms, n_elements, &sz, 3, chunks);
5842 }
5843
mspace_independent_comalloc(mspace msp,size_t n_elements,size_t sizes[],void * chunks[])5844 void** mspace_independent_comalloc(mspace msp, size_t n_elements,
5845 size_t sizes[], void* chunks[]) {
5846 mstate ms = (mstate)msp;
5847 if (!ok_magic(ms)) {
5848 USAGE_ERROR_ACTION(ms,ms);
5849 return 0;
5850 }
5851 return ialloc(ms, n_elements, sizes, 0, chunks);
5852 }
5853
mspace_bulk_free(mspace msp,void * array[],size_t nelem)5854 size_t mspace_bulk_free(mspace msp, void* array[], size_t nelem) {
5855 return internal_bulk_free((mstate)msp, array, nelem);
5856 }
5857
5858 #if MALLOC_INSPECT_ALL
mspace_inspect_all(mspace msp,void (* handler)(void * start,void * end,size_t used_bytes,void * callback_arg),void * arg)5859 void mspace_inspect_all(mspace msp,
5860 void(*handler)(void *start,
5861 void *end,
5862 size_t used_bytes,
5863 void* callback_arg),
5864 void* arg) {
5865 mstate ms = (mstate)msp;
5866 if (ok_magic(ms)) {
5867 if (!PREACTION(ms)) {
5868 internal_inspect_all(ms, handler, arg);
5869 POSTACTION(ms);
5870 }
5871 }
5872 else {
5873 USAGE_ERROR_ACTION(ms,ms);
5874 }
5875 }
5876 #endif /* MALLOC_INSPECT_ALL */
5877
mspace_trim(mspace msp,size_t pad)5878 int mspace_trim(mspace msp, size_t pad) {
5879 int result = 0;
5880 mstate ms = (mstate)msp;
5881 if (ok_magic(ms)) {
5882 if (!PREACTION(ms)) {
5883 result = sys_trim(ms, pad);
5884 POSTACTION(ms);
5885 }
5886 }
5887 else {
5888 USAGE_ERROR_ACTION(ms,ms);
5889 }
5890 return result;
5891 }
5892
5893 #if !NO_MALLOC_STATS
mspace_malloc_stats(mspace msp)5894 void mspace_malloc_stats(mspace msp) {
5895 mstate ms = (mstate)msp;
5896 if (ok_magic(ms)) {
5897 internal_malloc_stats(ms);
5898 }
5899 else {
5900 USAGE_ERROR_ACTION(ms,ms);
5901 }
5902 }
5903 #endif /* NO_MALLOC_STATS */
5904
mspace_footprint(mspace msp)5905 size_t mspace_footprint(mspace msp) {
5906 size_t result = 0;
5907 mstate ms = (mstate)msp;
5908 if (ok_magic(ms)) {
5909 result = ms->footprint;
5910 }
5911 else {
5912 USAGE_ERROR_ACTION(ms,ms);
5913 }
5914 return result;
5915 }
5916
mspace_max_footprint(mspace msp)5917 size_t mspace_max_footprint(mspace msp) {
5918 size_t result = 0;
5919 mstate ms = (mstate)msp;
5920 if (ok_magic(ms)) {
5921 result = ms->max_footprint;
5922 }
5923 else {
5924 USAGE_ERROR_ACTION(ms,ms);
5925 }
5926 return result;
5927 }
5928
mspace_footprint_limit(mspace msp)5929 size_t mspace_footprint_limit(mspace msp) {
5930 size_t result = 0;
5931 mstate ms = (mstate)msp;
5932 if (ok_magic(ms)) {
5933 size_t maf = ms->footprint_limit;
5934 result = (maf == 0) ? MAX_SIZE_T : maf;
5935 }
5936 else {
5937 USAGE_ERROR_ACTION(ms,ms);
5938 }
5939 return result;
5940 }
5941
mspace_set_footprint_limit(mspace msp,size_t bytes)5942 size_t mspace_set_footprint_limit(mspace msp, size_t bytes) {
5943 size_t result = 0;
5944 mstate ms = (mstate)msp;
5945 if (ok_magic(ms)) {
5946 if (bytes == 0)
5947 result = granularity_align(1); /* Use minimal size */
5948 if (bytes == MAX_SIZE_T)
5949 result = 0; /* disable */
5950 else
5951 result = granularity_align(bytes);
5952 ms->footprint_limit = result;
5953 }
5954 else {
5955 USAGE_ERROR_ACTION(ms,ms);
5956 }
5957 return result;
5958 }
5959
5960 #if !NO_MALLINFO
mspace_mallinfo(mspace msp)5961 struct mallinfo mspace_mallinfo(mspace msp) {
5962 mstate ms = (mstate)msp;
5963 if (!ok_magic(ms)) {
5964 USAGE_ERROR_ACTION(ms,ms);
5965 }
5966 return internal_mallinfo(ms);
5967 }
5968 #endif /* NO_MALLINFO */
5969
mspace_usable_size(const void * mem)5970 size_t mspace_usable_size(const void* mem) {
5971 if (mem != 0) {
5972 mchunkptr p = mem2chunk(mem);
5973 if (is_inuse(p))
5974 return chunksize(p) - overhead_for(p);
5975 }
5976 return 0;
5977 }
5978
mspace_mallopt(int param_number,int value)5979 int mspace_mallopt(int param_number, int value) {
5980 return change_mparam(param_number, value);
5981 }
5982
5983 #endif /* MSPACES */
5984
5985
5986 /* -------------------- Alternative MORECORE functions ------------------- */
5987
5988 /*
5989 Guidelines for creating a custom version of MORECORE:
5990
5991 * For best performance, MORECORE should allocate in multiples of pagesize.
5992 * MORECORE may allocate more memory than requested. (Or even less,
5993 but this will usually result in a malloc failure.)
5994 * MORECORE must not allocate memory when given argument zero, but
5995 instead return one past the end address of memory from previous
5996 nonzero call.
5997 * For best performance, consecutive calls to MORECORE with positive
5998 arguments should return increasing addresses, indicating that
5999 space has been contiguously extended.
6000 * Even though consecutive calls to MORECORE need not return contiguous
6001 addresses, it must be OK for malloc'ed chunks to span multiple
6002 regions in those cases where they do happen to be contiguous.
6003 * MORECORE need not handle negative arguments -- it may instead
6004 just return MFAIL when given negative arguments.
6005 Negative arguments are always multiples of pagesize. MORECORE
6006 must not misinterpret negative args as large positive unsigned
6007 args. You can suppress all such calls from even occurring by defining
6008 MORECORE_CANNOT_TRIM,
6009
6010 As an example alternative MORECORE, here is a custom allocator
6011 kindly contributed for pre-OSX macOS. It uses virtually but not
6012 necessarily physically contiguous non-paged memory (locked in,
6013 present and won't get swapped out). You can use it by uncommenting
6014 this section, adding some #includes, and setting up the appropriate
6015 defines above:
6016
6017 #define MORECORE osMoreCore
6018
6019 There is also a shutdown routine that should somehow be called for
6020 cleanup upon program exit.
6021
6022 #define MAX_POOL_ENTRIES 100
6023 #define MINIMUM_MORECORE_SIZE (64 * 1024U)
6024 static int next_os_pool;
6025 void *our_os_pools[MAX_POOL_ENTRIES];
6026
6027 void *osMoreCore(int size)
6028 {
6029 void *ptr = 0;
6030 static void *sbrk_top = 0;
6031
6032 if (size > 0)
6033 {
6034 if (size < MINIMUM_MORECORE_SIZE)
6035 size = MINIMUM_MORECORE_SIZE;
6036 if (CurrentExecutionLevel() == kTaskLevel)
6037 ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
6038 if (ptr == 0)
6039 {
6040 return (void *) MFAIL;
6041 }
6042 // save ptrs so they can be freed during cleanup
6043 our_os_pools[next_os_pool] = ptr;
6044 next_os_pool++;
6045 ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
6046 sbrk_top = (char *) ptr + size;
6047 return ptr;
6048 }
6049 else if (size < 0)
6050 {
6051 // we don't currently support shrink behavior
6052 return (void *) MFAIL;
6053 }
6054 else
6055 {
6056 return sbrk_top;
6057 }
6058 }
6059
6060 // cleanup any allocated memory pools
6061 // called as last thing before shutting down driver
6062
6063 void osCleanupMem(void)
6064 {
6065 void **ptr;
6066
6067 for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
6068 if (*ptr)
6069 {
6070 PoolDeallocate(*ptr);
6071 *ptr = 0;
6072 }
6073 }
6074
6075 */
6076
6077
6078 /* -----------------------------------------------------------------------
6079 History:
6080 v2.8.6 Wed Aug 29 06:57:58 2012 Doug Lea
6081 * fix bad comparison in dlposix_memalign
6082 * don't reuse adjusted asize in sys_alloc
6083 * add LOCK_AT_FORK -- thanks to Kirill Artamonov for the suggestion
6084 * reduce compiler warnings -- thanks to all who reported/suggested these
6085
6086 v2.8.5 Sun May 22 10:26:02 2011 Doug Lea (dl at gee)
6087 * Always perform unlink checks unless INSECURE
6088 * Add posix_memalign.
6089 * Improve realloc to expand in more cases; expose realloc_in_place.
6090 Thanks to Peter Buhr for the suggestion.
6091 * Add footprint_limit, inspect_all, bulk_free. Thanks
6092 to Barry Hayes and others for the suggestions.
6093 * Internal refactorings to avoid calls while holding locks
6094 * Use non-reentrant locks by default. Thanks to Roland McGrath
6095 for the suggestion.
6096 * Small fixes to mspace_destroy, reset_on_error.
6097 * Various configuration extensions/changes. Thanks
6098 to all who contributed these.
6099
6100 V2.8.4a Thu Apr 28 14:39:43 2011 (dl at gee.cs.oswego.edu)
6101 * Update Creative Commons URL
6102
6103 V2.8.4 Wed May 27 09:56:23 2009 Doug Lea (dl at gee)
6104 * Use zeros instead of prev foot for is_mmapped
6105 * Add mspace_track_large_chunks; thanks to Jean Brouwers
6106 * Fix set_inuse in internal_realloc; thanks to Jean Brouwers
6107 * Fix insufficient sys_alloc padding when using 16byte alignment
6108 * Fix bad error check in mspace_footprint
6109 * Adaptations for ptmalloc; thanks to Wolfram Gloger.
6110 * Reentrant spin locks; thanks to Earl Chew and others
6111 * Win32 improvements; thanks to Niall Douglas and Earl Chew
6112 * Add NO_SEGMENT_TRAVERSAL and MAX_RELEASE_CHECK_RATE options
6113 * Extension hook in malloc_state
6114 * Various small adjustments to reduce warnings on some compilers
6115 * Various configuration extensions/changes for more platforms. Thanks
6116 to all who contributed these.
6117
6118 V2.8.3 Thu Sep 22 11:16:32 2005 Doug Lea (dl at gee)
6119 * Add max_footprint functions
6120 * Ensure all appropriate literals are size_t
6121 * Fix conditional compilation problem for some #define settings
6122 * Avoid concatenating segments with the one provided
6123 in create_mspace_with_base
6124 * Rename some variables to avoid compiler shadowing warnings
6125 * Use explicit lock initialization.
6126 * Better handling of sbrk interference.
6127 * Simplify and fix segment insertion, trimming and mspace_destroy
6128 * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
6129 * Thanks especially to Dennis Flanagan for help on these.
6130
6131 V2.8.2 Sun Jun 12 16:01:10 2005 Doug Lea (dl at gee)
6132 * Fix memalign brace error.
6133
6134 V2.8.1 Wed Jun 8 16:11:46 2005 Doug Lea (dl at gee)
6135 * Fix improper #endif nesting in C++
6136 * Add explicit casts needed for C++
6137
6138 V2.8.0 Mon May 30 14:09:02 2005 Doug Lea (dl at gee)
6139 * Use trees for large bins
6140 * Support mspaces
6141 * Use segments to unify sbrk-based and mmap-based system allocation,
6142 removing need for emulation on most platforms without sbrk.
6143 * Default safety checks
6144 * Optional footer checks. Thanks to William Robertson for the idea.
6145 * Internal code refactoring
6146 * Incorporate suggestions and platform-specific changes.
6147 Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
6148 Aaron Bachmann, Emery Berger, and others.
6149 * Speed up non-fastbin processing enough to remove fastbins.
6150 * Remove useless cfree() to avoid conflicts with other apps.
6151 * Remove internal memcpy, memset. Compilers handle builtins better.
6152 * Remove some options that no one ever used and rename others.
6153
6154 V2.7.2 Sat Aug 17 09:07:30 2002 Doug Lea (dl at gee)
6155 * Fix malloc_state bitmap array misdeclaration
6156
6157 V2.7.1 Thu Jul 25 10:58:03 2002 Doug Lea (dl at gee)
6158 * Allow tuning of FIRST_SORTED_BIN_SIZE
6159 * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
6160 * Better detection and support for non-contiguousness of MORECORE.
6161 Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
6162 * Bypass most of malloc if no frees. Thanks To Emery Berger.
6163 * Fix freeing of old top non-contiguous chunk im sysmalloc.
6164 * Raised default trim and map thresholds to 256K.
6165 * Fix mmap-related #defines. Thanks to Lubos Lunak.
6166 * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
6167 * Branch-free bin calculation
6168 * Default trim and mmap thresholds now 256K.
6169
6170 V2.7.0 Sun Mar 11 14:14:06 2001 Doug Lea (dl at gee)
6171 * Introduce independent_comalloc and independent_calloc.
6172 Thanks to Michael Pachos for motivation and help.
6173 * Make optional .h file available
6174 * Allow > 2GB requests on 32bit systems.
6175 * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
6176 Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
6177 and Anonymous.
6178 * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
6179 helping test this.)
6180 * memalign: check alignment arg
6181 * realloc: don't try to shift chunks backwards, since this
6182 leads to more fragmentation in some programs and doesn't
6183 seem to help in any others.
6184 * Collect all cases in malloc requiring system memory into sysmalloc
6185 * Use mmap as backup to sbrk
6186 * Place all internal state in malloc_state
6187 * Introduce fastbins (although similar to 2.5.1)
6188 * Many minor tunings and cosmetic improvements
6189 * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
6190 * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
6191 Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
6192 * Include errno.h to support default failure action.
6193
6194 V2.6.6 Sun Dec 5 07:42:19 1999 Doug Lea (dl at gee)
6195 * return null for negative arguments
6196 * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
6197 * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
6198 (e.g. WIN32 platforms)
6199 * Cleanup header file inclusion for WIN32 platforms
6200 * Cleanup code to avoid Microsoft Visual C++ compiler complaints
6201 * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
6202 memory allocation routines
6203 * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
6204 * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
6205 usage of 'assert' in non-WIN32 code
6206 * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
6207 avoid infinite loop
6208 * Always call 'fREe()' rather than 'free()'
6209
6210 V2.6.5 Wed Jun 17 15:57:31 1998 Doug Lea (dl at gee)
6211 * Fixed ordering problem with boundary-stamping
6212
6213 V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee)
6214 * Added pvalloc, as recommended by H.J. Liu
6215 * Added 64bit pointer support mainly from Wolfram Gloger
6216 * Added anonymously donated WIN32 sbrk emulation
6217 * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
6218 * malloc_extend_top: fix mask error that caused wastage after
6219 foreign sbrks
6220 * Add linux mremap support code from HJ Liu
6221
6222 V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee)
6223 * Integrated most documentation with the code.
6224 * Add support for mmap, with help from
6225 Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
6226 * Use last_remainder in more cases.
6227 * Pack bins using idea from colin@nyx10.cs.du.edu
6228 * Use ordered bins instead of best-fit threshhold
6229 * Eliminate block-local decls to simplify tracing and debugging.
6230 * Support another case of realloc via move into top
6231 * Fix error occuring when initial sbrk_base not word-aligned.
6232 * Rely on page size for units instead of SBRK_UNIT to
6233 avoid surprises about sbrk alignment conventions.
6234 * Add mallinfo, mallopt. Thanks to Raymond Nijssen
6235 (raymond@es.ele.tue.nl) for the suggestion.
6236 * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
6237 * More precautions for cases where other routines call sbrk,
6238 courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
6239 * Added macros etc., allowing use in linux libc from
6240 H.J. Lu (hjl@gnu.ai.mit.edu)
6241 * Inverted this history list
6242
6243 V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee)
6244 * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
6245 * Removed all preallocation code since under current scheme
6246 the work required to undo bad preallocations exceeds
6247 the work saved in good cases for most test programs.
6248 * No longer use return list or unconsolidated bins since
6249 no scheme using them consistently outperforms those that don't
6250 given above changes.
6251 * Use best fit for very large chunks to prevent some worst-cases.
6252 * Added some support for debugging
6253
6254 V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee)
6255 * Removed footers when chunks are in use. Thanks to
6256 Paul Wilson (wilson@cs.texas.edu) for the suggestion.
6257
6258 V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee)
6259 * Added malloc_trim, with help from Wolfram Gloger
6260 (wmglo@Dent.MED.Uni-Muenchen.DE).
6261
6262 V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g)
6263
6264 V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g)
6265 * realloc: try to expand in both directions
6266 * malloc: swap order of clean-bin strategy;
6267 * realloc: only conditionally expand backwards
6268 * Try not to scavenge used bins
6269 * Use bin counts as a guide to preallocation
6270 * Occasionally bin return list chunks in first scan
6271 * Add a few optimizations from colin@nyx10.cs.du.edu
6272
6273 V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g)
6274 * faster bin computation & slightly different binning
6275 * merged all consolidations to one part of malloc proper
6276 (eliminating old malloc_find_space & malloc_clean_bin)
6277 * Scan 2 returns chunks (not just 1)
6278 * Propagate failure in realloc if malloc returns 0
6279 * Add stuff to allow compilation on non-ANSI compilers
6280 from kpv@research.att.com
6281
6282 V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu)
6283 * removed potential for odd address access in prev_chunk
6284 * removed dependency on getpagesize.h
6285 * misc cosmetics and a bit more internal documentation
6286 * anticosmetics: mangled names in macros to evade debugger strangeness
6287 * tested on sparc, hp-700, dec-mips, rs6000
6288 with gcc & native cc (hp, dec only) allowing
6289 Detlefs & Zorn comparison study (in SIGPLAN Notices.)
6290
6291 Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu)
6292 * Based loosely on libg++-1.2X malloc. (It retains some of the overall
6293 structure of old version, but most details differ.)
6294
6295 */
6296