1 /* Malloc implementation for multiple threads without lock contention.
2 Copyright (C) 1996 Free Software Foundation, Inc.
3 This file is part of the GNU C Library.
4 Contributed by Wolfram Gloger <wmglo@dent.med.uni-muenchen.de>
5 and Doug Lea <dl@cs.oswego.edu>, 1996.
7 The GNU C Library is free software; you can redistribute it and/or
8 modify it under the terms of the GNU Library General Public License as
9 published by the Free Software Foundation; either version 2 of the
10 License, or (at your option) any later version.
12 The GNU C Library is distributed in the hope that it will be useful,
13 but WITHOUT ANY WARRANTY; without even the implied warranty of
14 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
15 Library General Public License for more details.
17 You should have received a copy of the GNU Library General Public
18 License along with the GNU C Library; see the file COPYING.LIB. If not,
19 write to the Free Software Foundation, Inc., 59 Temple Place - Suite 330,
20 Boston, MA 02111-1307, USA. */
22 /* VERSION 2.6.4-pt Wed Dec 4 00:35:54 MET 1996
24 This work is mainly derived from malloc-2.6.4 by Doug Lea
25 <dl@cs.oswego.edu>, which is available from:
27 ftp://g.oswego.edu/pub/misc/malloc.c
29 Most of the original comments are reproduced in the code below.
31 * Why use this malloc?
33 This is not the fastest, most space-conserving, most portable, or
34 most tunable malloc ever written. However it is among the fastest
35 while also being among the most space-conserving, portable and tunable.
36 Consistent balance across these factors results in a good general-purpose
37 allocator. For a high-level description, see
38 http://g.oswego.edu/dl/html/malloc.html
40 On many systems, the standard malloc implementation is by itself not
41 thread-safe, and therefore wrapped with a single global lock around
42 all malloc-related functions. In some applications, especially with
43 multiple available processors, this can lead to contention problems
44 and bad performance. This malloc version was designed with the goal
45 to avoid waiting for locks as much as possible. Statistics indicate
46 that this goal is achieved in many cases.
48 * Synopsis of public routines
50 (Much fuller descriptions are contained in the program documentation below.)
52 ptmalloc_init();
53 Initialize global configuration. When compiled for multiple threads,
54 this function must be called once before any other function in the
55 package. It is not required otherwise. It is called automatically
56 in the Linux/GNU C library.
57 malloc(size_t n);
58 Return a pointer to a newly allocated chunk of at least n bytes, or null
59 if no space is available.
60 free(Void_t* p);
61 Release the chunk of memory pointed to by p, or no effect if p is null.
62 realloc(Void_t* p, size_t n);
63 Return a pointer to a chunk of size n that contains the same data
64 as does chunk p up to the minimum of (n, p's size) bytes, or null
65 if no space is available. The returned pointer may or may not be
66 the same as p. If p is null, equivalent to malloc. Unless the
67 #define REALLOC_ZERO_BYTES_FREES below is set, realloc with a
68 size argument of zero (re)allocates a minimum-sized chunk.
69 memalign(size_t alignment, size_t n);
70 Return a pointer to a newly allocated chunk of n bytes, aligned
71 in accord with the alignment argument, which must be a power of
72 two.
73 valloc(size_t n);
74 Equivalent to memalign(pagesize, n), where pagesize is the page
75 size of the system (or as near to this as can be figured out from
76 all the includes/defines below.)
77 pvalloc(size_t n);
78 Equivalent to valloc(minimum-page-that-holds(n)), that is,
79 round up n to nearest pagesize.
80 calloc(size_t unit, size_t quantity);
81 Returns a pointer to quantity * unit bytes, with all locations
82 set to zero.
83 cfree(Void_t* p);
84 Equivalent to free(p).
85 malloc_trim(size_t pad);
86 Release all but pad bytes of freed top-most memory back
87 to the system. Return 1 if successful, else 0.
88 malloc_usable_size(Void_t* p);
89 Report the number of usable allocated bytes associated with allocated
90 chunk p. This may or may not report more bytes than were requested,
91 due to alignment and minimum size constraints.
92 malloc_stats();
93 Prints brief summary statistics on stderr.
94 mallinfo()
95 Returns (by copy) a struct containing various summary statistics.
96 mallopt(int parameter_number, int parameter_value)
97 Changes one of the tunable parameters described below. Returns
98 1 if successful in changing the parameter, else 0.
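As a small usage sketch (illustrative only; exact semantics are described
above, and under the GNU C library ptmalloc_init() runs automatically):

  ptmalloc_init();
  p = malloc(1024);
  if (p != 0) free(p);
  malloc_stats();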
100 * Vital statistics:
102 Alignment: 8-byte
103 8 byte alignment is currently hardwired into the design. This
104 seems to suffice for all current machines and C compilers.
106 Assumed pointer representation: 4 or 8 bytes
107 Code for 8-byte pointers is untested by me but has worked
108 reliably by Wolfram Gloger, who contributed most of the
109 changes supporting this.
111 Assumed size_t representation: 4 or 8 bytes
112 Note that size_t is allowed to be 4 bytes even if pointers are 8.
114 Minimum overhead per allocated chunk: 4 or 8 bytes
115 Each malloced chunk has a hidden overhead of 4 bytes holding size
116 and status information.
118 Minimum allocated size: 4-byte ptrs: 16 bytes (including 4 overhead)
119 8-byte ptrs: 24/32 bytes (including 4/8 overhead)
121 When a chunk is freed, 12 (for 4byte ptrs) or 20 (for 8 byte
122 ptrs but 4 byte size) or 24 (for 8/8) additional bytes are
123 needed; 4 (8) for a trailing size field
124 and 8 (16) bytes for free list pointers. Thus, the minimum
125 allocatable size is 16/24/32 bytes.
127 Even a request for zero bytes (i.e., malloc(0)) returns a
128 pointer to something of the minimum allocatable size.
130 Maximum allocated size: 4-byte size_t: 2^31 - 8 bytes
131 8-byte size_t: 2^63 - 16 bytes
133 It is assumed that (possibly signed) size_t bit values suffice to
134 represent chunk sizes. `Possibly signed' is due to the fact
135 that `size_t' may be defined on a system as either a signed or
136 an unsigned type. To be conservative, values that would appear
137 as negative numbers are avoided.
138 Requests for sizes with a negative sign bit will return a
139 minimum-sized chunk.
141 Maximum overhead wastage per allocated chunk: normally 15 bytes
143 Alignment demands, plus the minimum allocatable size restriction
144 make the normal worst-case wastage 15 bytes (i.e., up to 15
145 more bytes will be allocated than were requested in malloc), with
146 two exceptions:
147 1. Because requests for zero bytes allocate non-zero space,
148 the worst case wastage for a request of zero bytes is 24 bytes.
149 2. For requests >= mmap_threshold that are serviced via
150 mmap(), the worst case wastage is 8 bytes plus the remainder
151 from a system page (the minimal mmap unit); typically 4096 bytes.
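As a worked example of the normal case (assuming 4-byte size_t and
pointers): a request for 1 byte carries 4 bytes of overhead and is then
rounded up to the 16-byte minimum chunk, so 15 more bytes are allocated
than were requested; a request for 25 bytes becomes a 32-byte chunk,
i.e. only 7 extra bytes.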
153 * Limitations
155 Here are some features that are NOT currently supported
157 * No user-definable hooks for callbacks and the like.
158 * No automated mechanism for fully checking that all accesses
159 to malloced memory stay within their bounds.
160 * No support for compaction.
162 * Synopsis of compile-time options:
164 People have reported using previous versions of this malloc on all
165 versions of Unix, sometimes by tweaking some of the defines
166 below. It has been tested most extensively on Solaris and
167 Linux. People have also reported adapting this malloc for use in
168 stand-alone embedded systems.
170 The implementation is in straight, hand-tuned ANSI C. Among other
171 consequences, it uses a lot of macros. Because of this, to be at
172 all usable, this code should be compiled using an optimizing compiler
173 (for example gcc -O2) that can simplify expressions and control
174 paths.
176 __STD_C (default: derived from C compiler defines)
177 Nonzero if using ANSI-standard C compiler, a C++ compiler, or
178 a C compiler sufficiently close to ANSI to get away with it.
179 MALLOC_DEBUG (default: NOT defined)
180 Define to enable debugging. Adds fairly extensive assertion-based
181 checking to help track down memory errors, but noticeably slows down
182 execution.
183 REALLOC_ZERO_BYTES_FREES (default: NOT defined)
184 Define this if you think that realloc(p, 0) should be equivalent
185 to free(p). Otherwise, since malloc returns a unique pointer for
186 malloc(0), so does realloc(p, 0).
187 HAVE_MEMCPY (default: defined)
188 Define if you are not otherwise using ANSI STD C, but still
189 have memcpy and memset in your C library and want to use them.
190 Otherwise, simple internal versions are supplied.
191 USE_MEMCPY (default: 1 if HAVE_MEMCPY is defined, 0 otherwise)
192 Define as 1 if you want the C library versions of memset and
193 memcpy called in realloc and calloc (otherwise macro versions are used).
194 At least on some platforms, the simple macro versions usually
195 outperform libc versions.
196 HAVE_MMAP (default: defined as 1)
197 Define to non-zero to optionally make malloc() use mmap() to
198 allocate very large blocks.
199 HAVE_MREMAP (default: defined as 0 unless Linux libc set)
200 Define to non-zero to optionally make realloc() use mremap() to
201 reallocate very large blocks.
202 malloc_getpagesize (default: derived from system #includes)
203 Either a constant or routine call returning the system page size.
204 HAVE_USR_INCLUDE_MALLOC_H (default: NOT defined)
205 Optionally define if you are on a system with a /usr/include/malloc.h
206 that declares struct mallinfo. It is not at all necessary to
207 define this even if you do, but will ensure consistency.
208 INTERNAL_SIZE_T (default: size_t)
209 Define to a 32-bit type (probably `unsigned int') if you are on a
210 64-bit machine, yet do not want or need to allow malloc requests of
211 greater than 2^31 to be handled. This saves space, especially for
212 very large chunks.
213 _LIBC (default: NOT defined)
214 Defined only when compiled as part of the Linux libc/glibc.
215 Also note that there is some odd internal name-mangling via defines
216 (for example, internally, `malloc' is named `mALLOc') needed
217 when compiling in this case. These look funny but don't otherwise
218 affect anything.
219 LACKS_UNISTD_H (default: undefined)
220 Define this if your system does not have a <unistd.h>.
221 MORECORE (default: sbrk)
222 The name of the routine to call to obtain more memory from the system.
223 MORECORE_FAILURE (default: -1)
224 The value returned upon failure of MORECORE.
225 MORECORE_CLEARS (default 1)
226 True (1) if the routine mapped to MORECORE zeroes out memory (which
227 is true for sbrk).
228 DEFAULT_TRIM_THRESHOLD
229 DEFAULT_TOP_PAD
230 DEFAULT_MMAP_THRESHOLD
231 DEFAULT_MMAP_MAX
232 Default values of tunable parameters (described in detail below)
233 controlling interaction with host system routines (sbrk, mmap, etc).
234 These values may also be changed dynamically via mallopt(). The
235 preset defaults are those that give best performance for typical
236 programs/systems.
243 * Compile-time options for multiple threads:
245 USE_PTHREADS, USE_THR, USE_SPROC
246 Define one of these as 1 to select the thread interface:
247 POSIX threads, Solaris threads or SGI sproc's, respectively.
248 If none of these is defined as non-zero, you get a `normal'
249 malloc implementation which is not thread-safe. Support for
250 multiple threads requires HAVE_MMAP=1. As an exception, when
251 compiling for GNU libc, i.e. when _LIBC is defined, then none of
252 the USE_... symbols have to be defined.
254 HEAP_MIN_SIZE, HEAP_MAX_SIZE (default: 32kB, 1MB)
256 When thread support is enabled, additional `heap's are created
257 with mmap calls. These are limited in size; HEAP_MIN_SIZE should
258 be a multiple of the page size, while HEAP_MAX_SIZE must be a power
259 of two for alignment reasons. HEAP_MAX_SIZE should be at least
260 twice as large as the mmap threshold.
261 THREAD_STATS (default: 0)
262 When this is defined as non-zero, some statistics on mutex locking
263 are collected.
273 #if defined (__STDC__)
280 #endif /*__cplusplus*/
293 #include <stddef.h> /* for size_t */
295 #include <sys/types.h>
298 /* Macros for handling mutexes and thread-specific data. This is
299 included early, because some thread-related header files (such as
300 pthread.h) should be included before any others. */
301 #include "thread-m.h"
307 #include <stdio.h> /* needed for malloc_stats */
318 Because freed chunks may be overwritten with link fields, this
319 malloc will often die when freed memory is overwritten by user
320 programs. This can be very effective (albeit in an annoying way)
321 in helping track down dangling pointers.
323 If you compile with -DMALLOC_DEBUG, a number of assertion checks are
324 enabled that will catch more memory errors. You probably won't be
325 able to make much sense of the actual assertion errors, but they
326 should help you locate incorrectly overwritten memory. The
327 checking is fairly extensive, and will slow down execution
328 noticeably. Calling malloc_stats or mallinfo with MALLOC_DEBUG set will
329 attempt to check every non-mmapped allocated and free chunk in the
330 course of computing the summaries. (By nature, mmapped regions
331 cannot be checked very much automatically.)
333 Setting MALLOC_DEBUG may also be helpful if you are trying to modify
334 this code. The assertions in the check routines spell out in more
335 detail the assumptions and invariants underlying the algorithms.
342 #define assert(x) ((void)0)
347 INTERNAL_SIZE_T is the word-size used for internal bookkeeping
348 of chunk sizes. On a 64-bit machine, you can reduce malloc
349 overhead by defining INTERNAL_SIZE_T to be a 32 bit `unsigned int'
350 at the expense of not being able to handle requests greater than
351 2^31. This limitation is hardly ever a concern; you are encouraged
352 to set this. However, the default version is the same as size_t.
355 #ifndef INTERNAL_SIZE_T
356 #define INTERNAL_SIZE_T size_t
360 REALLOC_ZERO_BYTES_FREES should be set if a call to
361 realloc with zero bytes should be the same as a call to free.
362 Some people think it should. Otherwise, since this malloc
363 returns a unique pointer for malloc(0), so does realloc(p, 0).
367 /* #define REALLOC_ZERO_BYTES_FREES */
371 HAVE_MEMCPY should be defined if you are not otherwise using
372 ANSI STD C, but still have memcpy and memset in your C library
373 and want to use them in calloc and realloc. Otherwise simple
374 macro versions are defined here.
376 USE_MEMCPY should be defined as 1 if you actually want to
377 have memset and memcpy called. People report that the macro
378 versions are often enough faster than libc versions on many
379 systems that it is better to use them.
393 #if (__STD_C || defined(HAVE_MEMCPY))
396 void* memset(void*, int, size_t);
397 void* memcpy(void*, const void*, size_t);
406 /* The following macros are only invoked with (2n+1)-multiples of
407 INTERNAL_SIZE_T units, with a positive integer n. This is exploited
408 for fast inline execution when n is small. */
410 #define MALLOC_ZERO(charp, nbytes) \
412 INTERNAL_SIZE_T mzsz = (nbytes); \
413 if(mzsz <= 9*sizeof(mzsz)) { \
414 INTERNAL_SIZE_T* mz = (INTERNAL_SIZE_T*) (charp); \
415 if(mzsz >= 5*sizeof(mzsz)) { *mz++ = 0; \
417 if(mzsz >= 7*sizeof(mzsz)) { *mz++ = 0; \
419 if(mzsz >= 9*sizeof(mzsz)) { *mz++ = 0; \
424 } else memset((charp), 0, mzsz); \
427 #define MALLOC_COPY(dest,src,nbytes) \
429 INTERNAL_SIZE_T mcsz = (nbytes); \
430 if(mcsz <= 9*sizeof(mcsz)) { \
431 INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) (src); \
432 INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) (dest); \
433 if(mcsz >= 5*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \
434 *mcdst++ = *mcsrc++; \
435 if(mcsz >= 7*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \
436 *mcdst++ = *mcsrc++; \
437 if(mcsz >= 9*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \
438 *mcdst++ = *mcsrc++; }}} \
439 *mcdst++ = *mcsrc++; \
440 *mcdst++ = *mcsrc++; \
442 } else memcpy(dest, src, mcsz); \
445 #else /* !USE_MEMCPY */
447 /* Use Duff's device for good zeroing/copying performance. */
449 #define MALLOC_ZERO(charp, nbytes) \
451 INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp); \
452 long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn; \
453 if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \
455 case 0: for(;;) { *mzp++ = 0; \
456 case 7: *mzp++ = 0; \
457 case 6: *mzp++ = 0; \
458 case 5: *mzp++ = 0; \
459 case 4: *mzp++ = 0; \
460 case 3: *mzp++ = 0; \
461 case 2: *mzp++ = 0; \
462 case 1: *mzp++ = 0; if(mcn <= 0) break; mcn--; } \
466 #define MALLOC_COPY(dest,src,nbytes) \
468 INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) src; \
469 INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) dest; \
470 long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn; \
471 if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \
473 case 0: for(;;) { *mcdst++ = *mcsrc++; \
474 case 7: *mcdst++ = *mcsrc++; \
475 case 6: *mcdst++ = *mcsrc++; \
476 case 5: *mcdst++ = *mcsrc++; \
477 case 4: *mcdst++ = *mcsrc++; \
478 case 3: *mcdst++ = *mcsrc++; \
479 case 2: *mcdst++ = *mcsrc++; \
480 case 1: *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; } \
488 Define HAVE_MMAP to optionally make malloc() use mmap() to
489 allocate very large blocks. These will be returned to the
490 operating system immediately after a free().
498 Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
499 large blocks. This is currently only possible on Linux with
500 kernel versions newer than 1.3.77.
504 #define HAVE_MREMAP defined(__linux__)
511 #include <sys/mman.h>
513 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
514 #define MAP_ANONYMOUS MAP_ANON
517 #endif /* HAVE_MMAP */
520 Access to system page size. To the extent possible, this malloc
521 manages memory from the system in page-size units.
523 The following mechanics for getpagesize were adapted from
524 bsd/gnu getpagesize.h
527 #ifndef LACKS_UNISTD_H
531 #ifndef malloc_getpagesize
532 # ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */
533 # ifndef _SC_PAGE_SIZE
534 # define _SC_PAGE_SIZE _SC_PAGESIZE
537 # ifdef _SC_PAGE_SIZE
538 # define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
540 # if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
541 extern size_t getpagesize();
542 # define malloc_getpagesize getpagesize()
544 # include <sys/param.h>
545 # ifdef EXEC_PAGESIZE
546 # define malloc_getpagesize EXEC_PAGESIZE
550 # define malloc_getpagesize NBPG
552 # define malloc_getpagesize (NBPG * CLSIZE)
556 # define malloc_getpagesize NBPC
559 # define malloc_getpagesize PAGESIZE
561 # define malloc_getpagesize (4096) /* just guess */
574 This version of malloc supports the standard SVID/XPG mallinfo
575 routine that returns a struct containing the same kind of
576 information you can get from malloc_stats. It should work on
577 any SVID/XPG compliant system that has a /usr/include/malloc.h
578 defining struct mallinfo. (If you'd like to install such a thing
579 yourself, cut out the preliminary declarations as described above
580 and below and save them in a malloc.h file. But there's no
581 compelling reason to bother to do this.)
583 The main declaration needed is the mallinfo struct that is returned
584 (by-copy) by mallinfo(). The SVID/XPG mallinfo struct contains a
585 bunch of fields, most of which are not even meaningful in this
586 version of malloc. Some of these fields are instead filled by
587 mallinfo() with other numbers that might possibly be of interest.
589 HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
590 /usr/include/malloc.h file that includes a declaration of struct
591 mallinfo. If so, it is included; else an SVID2/XPG2 compliant
592 version is declared below. These must be precisely the same for
593 mallinfo() to work.
597 /* #define HAVE_USR_INCLUDE_MALLOC_H */
599 #if HAVE_USR_INCLUDE_MALLOC_H
600 # include "/usr/include/malloc.h"
605 # include "ptmalloc.h"
611 #ifndef DEFAULT_TRIM_THRESHOLD
612 #define DEFAULT_TRIM_THRESHOLD (128 * 1024)
616 M_TRIM_THRESHOLD is the maximum amount of unused top-most memory
617 to keep before releasing via malloc_trim in free().
619 Automatic trimming is mainly useful in long-lived programs.
620 Because trimming via sbrk can be slow on some systems, and can
621 sometimes be wasteful (in cases where programs immediately
622 afterward allocate more large chunks) the value should be high
623 enough so that your overall system performance would improve by
624 releasing this much memory.
626 The trim threshold and the mmap control parameters (see below)
627 can be traded off with one another. Trimming and mmapping are
628 two different ways of releasing unused memory back to the
629 system. Between these two, it is often possible to keep
630 system-level demands of a long-lived program down to a bare
631 minimum. For example, in one test suite of sessions measuring
632 the XF86 X server on Linux, using a trim threshold of 128K and a
633 mmap threshold of 192K led to near-minimal long term resource
634 consumption.
636 If you are using this malloc in a long-lived program, it should
637 pay to experiment with these values. As a rough guide, you
638 might set to a value close to the average size of a process
639 (program) running on your system. Releasing this much memory
640 would allow such a process to run in memory. Generally, it's
641 worth it to tune for trimming rather than memory mapping when a
642 program undergoes phases where several large chunks are
643 allocated and released in ways that can reuse each other's
644 storage, perhaps mixed with phases where there are no such
645 chunks at all. And in well-behaved long-lived programs,
646 controlling release of large blocks via trimming versus mapping
649 However, in most programs, these parameters serve mainly as
650 protection against the system-level effects of carrying around
651 massive amounts of unneeded memory. Since frequent calls to
652 sbrk, mmap, and munmap otherwise degrade performance, the default
653 parameters are set to relatively high values that serve only as
654 safeguards.
656 The default trim value is high enough to cause trimming only in
657 fairly extreme (by current memory consumption standards) cases.
658 It must be greater than page size to have any useful effect. To
659 disable trimming completely, you can set to (unsigned long)(-1);
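For example (a sketch; M_TRIM_THRESHOLD is the corresponding mallopt
parameter number): a long-lived program could raise the threshold with
mallopt(M_TRIM_THRESHOLD, 256*1024), or disable trimming altogether
with mallopt(M_TRIM_THRESHOLD, -1).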
665 #ifndef DEFAULT_TOP_PAD
666 #define DEFAULT_TOP_PAD (0)
670 M_TOP_PAD is the amount of extra `padding' space to allocate or
671 retain whenever sbrk is called. It is used in two ways internally:
673 * When sbrk is called to extend the top of the arena to satisfy
674 a new malloc request, this much padding is added to the sbrk
677 * When malloc_trim is called automatically from free(),
678 it is used as the `pad' argument.
680 In both cases, the actual amount of padding is rounded
681 so that the end of the arena is always a system page boundary.
683 The main reason for using padding is to avoid calling sbrk so
684 often. Having even a small pad greatly reduces the likelihood
685 that nearly every malloc request during program start-up (or
686 after trimming) will invoke sbrk, which needlessly wastes
687 time.
689 Automatic rounding-up to page-size units is normally sufficient
690 to avoid measurable overhead, so the default is 0. However, in
691 systems where sbrk is relatively slow, it can pay to increase
692 this value, at the expense of carrying around more memory than
693 the program needs.
698 #ifndef DEFAULT_MMAP_THRESHOLD
699 #define DEFAULT_MMAP_THRESHOLD (128 * 1024)
704 M_MMAP_THRESHOLD is the request size threshold for using mmap()
705 to service a request. Requests of at least this size that cannot
706 be allocated using already-existing space will be serviced via mmap.
707 (If enough normal freed space already exists it is used instead.)
709 Using mmap segregates relatively large chunks of memory so that
710 they can be individually obtained and released from the host
711 system. A request serviced through mmap is never reused by any
712 other request (at least not directly; the system may just so
713 happen to remap successive requests to the same locations).
715 Segregating space in this way has the benefit that mmapped space
716 can ALWAYS be individually released back to the system, which
717 helps keep the system level memory demands of a long-lived
718 program low. Mapped memory can never become `locked' between
719 other chunks, as can happen with normally allocated chunks, which
720 means that even trimming via malloc_trim would not release them.
722 However, it has the disadvantages that:
724 1. The space cannot be reclaimed, consolidated, and then
725 used to service later requests, as happens with normal chunks.
726 2. It can lead to more wastage because of mmap page alignment
727 requirements.
728 3. It causes malloc performance to be more dependent on host
729 system memory management support routines which may vary in
730 implementation quality and may impose arbitrary
731 limitations. Generally, servicing a request via normal
732 malloc steps is faster than going through a system's mmap.
734 All together, these considerations should lead you to use mmap
735 only for relatively large requests.
742 #ifndef DEFAULT_MMAP_MAX
744 #define DEFAULT_MMAP_MAX (1024)
746 #define DEFAULT_MMAP_MAX (0)
751 M_MMAP_MAX is the maximum number of requests to simultaneously
752 service using mmap. This parameter exists because:
754 1. Some systems have a limited number of internal tables for
755 use by mmap.
756 2. In most systems, overreliance on mmap can degrade overall
757 performance.
758 3. If a program allocates many large regions, it is probably
759 better off using normal sbrk-based allocation routines that
760 can reclaim and reallocate normal heap memory. Using a
761 small value allows transition into this mode after the
762 first few allocations.
764 Setting to 0 disables all use of mmap. If HAVE_MMAP is not set,
765 the default value is 0, and attempts to set it to non-zero values
766 in mallopt will fail.
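For example (a sketch; M_MMAP_MAX and M_MMAP_THRESHOLD are the
corresponding mallopt parameter numbers): mallopt(M_MMAP_MAX, 0)
disables mmap-based allocation at run time, while
mallopt(M_MMAP_THRESHOLD, 1024*1024) restricts it to requests of at
least 1MB.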
771 #define HEAP_MIN_SIZE (32*1024)
772 #define HEAP_MAX_SIZE (1024*1024) /* must be a power of two */
774 /* HEAP_MIN_SIZE and HEAP_MAX_SIZE limit the size of mmap()ed heaps
775 that are dynamically created for multi-threaded programs. The
776 maximum size must be a power of two, for fast determination of
777 which heap belongs to a chunk. It should be much larger than
778 the mmap threshold, so that requests with a size just below that
779 threshold can be fulfilled without creating too many heaps.
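Because every heap starts at an address that is a multiple of
HEAP_MAX_SIZE, the heap (and through it the arena) owning a chunk can be
recovered by masking alone. A sketch of the idea behind the heap_for_ptr
macro defined further below:

  heap_info *h = (heap_info *)((unsigned long)ptr & ~(HEAP_MAX_SIZE-1));
  arena *ar = h->ar_ptr;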
785 #define THREAD_STATS 0
788 /* If THREAD_STATS is non-zero, some statistics on mutex locking are
789 collected. */
794 Special defines for the Linux/GNU C library.
803 Void_t * __default_morecore (ptrdiff_t);
804 static Void_t *(*__morecore)(ptrdiff_t) = __default_morecore;
808 Void_t * __default_morecore ();
809 static Void_t *(*__morecore)() = __default_morecore;
813 #define MORECORE (*__morecore)
814 #define MORECORE_FAILURE 0
815 #define MORECORE_CLEARS 1
820 extern Void_t* sbrk(ptrdiff_t);
822 extern Void_t* sbrk();
826 #define MORECORE sbrk
829 #ifndef MORECORE_FAILURE
830 #define MORECORE_FAILURE -1
833 #ifndef MORECORE_CLEARS
834 #define MORECORE_CLEARS 1
839 #if 0 && defined(_LIBC)
841 #define cALLOc __libc_calloc
842 #define fREe __libc_free
843 #define mALLOc __libc_malloc
844 #define mEMALIGn __libc_memalign
845 #define rEALLOc __libc_realloc
846 #define vALLOc __libc_valloc
847 #define pvALLOc __libc_pvalloc
848 #define mALLINFo __libc_mallinfo
849 #define mALLOPt __libc_mallopt
851 #pragma weak calloc = __libc_calloc
852 #pragma weak free = __libc_free
853 #pragma weak cfree = __libc_free
854 #pragma weak malloc = __libc_malloc
855 #pragma weak memalign = __libc_memalign
856 #pragma weak realloc = __libc_realloc
857 #pragma weak valloc = __libc_valloc
858 #pragma weak pvalloc = __libc_pvalloc
859 #pragma weak mallinfo = __libc_mallinfo
860 #pragma weak mallopt = __libc_mallopt
864 #define cALLOc calloc
866 #define mALLOc malloc
867 #define mEMALIGn memalign
868 #define rEALLOc realloc
869 #define vALLOc valloc
870 #define pvALLOc pvalloc
871 #define mALLINFo mallinfo
872 #define mALLOPt mallopt
876 /* Public routines */
881 void ptmalloc_init(void);
883 Void_t* mALLOc(size_t);
885 Void_t* rEALLOc(Void_t*, size_t);
886 Void_t* mEMALIGn(size_t, size_t);
887 Void_t* vALLOc(size_t);
888 Void_t* pvALLOc(size_t);
889 Void_t* cALLOc(size_t, size_t);
891 int malloc_trim(size_t);
892 size_t malloc_usable_size(Void_t*);
893 void malloc_stats(void);
894 int mALLOPt(int, int);
895 struct mallinfo mALLINFo(void);
898 void ptmalloc_init();
909 size_t malloc_usable_size();
912 struct mallinfo mALLINFo();
917 }; /* end of extern "C" */
920 #if !defined(NO_THREADS) && !HAVE_MMAP
921 "Can't have threads support without mmap"
932 INTERNAL_SIZE_T prev_size; /* Size of previous chunk (if free). */
933 INTERNAL_SIZE_T size; /* Size in bytes, including overhead. */
934 struct malloc_chunk* fd; /* double links -- used only if free. */
935 struct malloc_chunk* bk;
938 typedef struct malloc_chunk* mchunkptr;
942 malloc_chunk details:
944 (The following includes lightly edited explanations by Colin Plumb.)
946 Chunks of memory are maintained using a `boundary tag' method as
947 described in e.g., Knuth or Standish. (See the paper by Paul
948 Wilson ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a
949 survey of such techniques.) Sizes of free chunks are stored both
950 in the front of each chunk and at the end. This makes
951 consolidating fragmented chunks into bigger chunks very fast. The
952 size fields also hold bits representing whether chunks are free or
955 An allocated chunk looks like this:
958 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
959 | Size of previous chunk, if allocated | |
960 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
961 | Size of chunk, in bytes |P|
962 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
963 | User data starts here... .
965 . (malloc_usable_size() bytes) .
967 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
969 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
972 Where "chunk" is the front of the chunk for the purpose of most of
973 the malloc code, but "mem" is the pointer that is returned to the
974 user. "Nextchunk" is the beginning of the next contiguous chunk.
976 Chunks always begin on even word boundaries, so the mem portion
977 (which is returned to the user) is also on an even word boundary, and
978 thus double-word aligned.
980 Free chunks are stored in circular doubly-linked lists, and look like this:
982 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
983 | Size of previous chunk |
984 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
985 `head:' | Size of chunk, in bytes |P|
986 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
987 | Forward pointer to next chunk in list |
988 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
989 | Back pointer to previous chunk in list |
990 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
991 | Unused space (may be 0 bytes long) .
994 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
995 `foot:' | Size of chunk, in bytes |
996 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
998 The P (PREV_INUSE) bit, stored in the unused low-order bit of the
999 chunk size (which is always a multiple of two words), is an in-use
1000 bit for the *previous* chunk. If that bit is *clear*, then the
1001 word before the current chunk size contains the previous chunk
1002 size, and can be used to find the front of the previous chunk.
1003 (The very first chunk allocated always has this bit set,
1004 preventing access to non-existent (or non-owned) memory.)
1006 Note that the `foot' of the current chunk is actually represented
1007 as the prev_size of the NEXT chunk. (This makes it easier to
1008 deal with alignments etc).
1010 The two exceptions to all this are
1012 1. The special chunk `top', which doesn't bother using the
1013 trailing size field since there is no
1014 next contiguous chunk that would have to index off it. (After
1015 initialization, `top' is forced to always exist. If it would
1016 become less than MINSIZE bytes long, it is replenished via
1019 2. Chunks allocated via mmap, which have the second-lowest-order
1020 bit (IS_MMAPPED) set in their size fields. Because they are
1021 never merged or traversed from any other chunk, they have no
1022 foot size or inuse information.
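To illustrate the boundary-tag scheme (a sketch using the macros defined
further below), for a chunk p that is neither `top' nor mmapped:

  mchunkptr nxt = chunk_at_offset(p, chunksize(p));
  mchunkptr prv = prev_inuse(p) ? 0 : prev_chunk(p);

nxt always points at a valid chunk header, while prv may only be
computed when the PREV_INUSE bit of p is clear, i.e. when the preceding
chunk is free and has stored its size in p->prev_size.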
1024 Available chunks are kept in any of several places (all declared below):
1026 * `av': An array of chunks serving as bin headers for consolidated
1027 chunks. Each bin is doubly linked. The bins are approximately
1028 proportionally (log) spaced. There are a lot of these bins
1029 (128). This may look excessive, but works very well in
1030 practice. All procedures maintain the invariant that no
1031 consolidated chunk physically borders another one. Chunks in
1032 bins are kept in size order, with ties going to the
1033 approximately least recently used chunk.
1035 The chunks in each bin are maintained in decreasing sorted order by
1036 size. This is irrelevant for the small bins, which all contain
1037 the same-sized chunks, but facilitates best-fit allocation for
1038 larger chunks. (These lists are just sequential. Keeping them in
1039 order almost never requires enough traversal to warrant using
1040 fancier ordered data structures.) Chunks of the same size are
1041 linked with the most recently freed at the front, and allocations
1042 are taken from the back. This results in LRU or FIFO allocation
1043 order, which tends to give each chunk an equal opportunity to be
1044 consolidated with adjacent freed chunks, resulting in larger free
1045 chunks and less fragmentation.
1047 * `top': The top-most available chunk (i.e., the one bordering the
1048 end of available memory) is treated specially. It is never
1049 included in any bin, is used only if no other chunk is
1050 available, and is released back to the system if it is very
1051 large (see M_TRIM_THRESHOLD).
1053 * `last_remainder': A bin holding only the remainder of the
1054 most recently split (non-top) chunk. This bin is checked
1055 before other non-fitting chunks, so as to provide better
1056 locality for runs of sequentially allocated chunks.
1058 * Implicitly, through the host system's memory mapping tables.
1059 If supported, requests greater than a threshold are usually
1060 serviced via calls to mmap, and then later released via munmap.
1067 The bins are an array of pairs of pointers serving as the
1068 heads of (initially empty) doubly-linked lists of chunks, laid out
1069 in a way so that each pair can be treated as if it were in a
1070 malloc_chunk. (This way, the fd/bk offsets for linking bin heads
1071 and chunks are the same).
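Concretely (a sketch of the layout trick): bin_at(a, i) yields a pointer
2*SIZE_SZ bytes before &av[2*i+2], so the fd field of that pretend chunk
overlays av[2*i+2] and bk overlays av[2*i+3]; a bin head therefore costs
only two words, yet is linked and unlinked by the same code that handles
real chunks.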
1073 Bins for sizes < 512 bytes contain chunks of all the same size, spaced
1074 8 bytes apart. Larger bins are approximately logarithmically
1075 spaced. (See the table below.)
1079 64 bins of size 8
1080 32 bins of size 64
1081 16 bins of size 512
1082 8 bins of size 4096
1083 4 bins of size 32768
1084 2 bins of size 262144
1085 1 bin of size what's left
1087 There is actually a little bit of slop in the numbers in bin_index
1088 for the sake of speed. This makes no difference elsewhere.
1090 The special chunks `top' and `last_remainder' get their own bins,
1091 (this is implemented via yet more trickery with the av array),
1092 although `top' is never properly linked to its bin since it is
1093 always handled specially.
1097 #define NAV 128 /* number of bins */
1099 typedef struct malloc_chunk* mbinptr;
1101 /* An arena is a configuration of malloc_chunks together with an array
1102 of bins. With multiple threads, it must be locked via a mutex
1103 before changing its data structures. One or more `heaps' are
1104 associated with each arena, except for the main_arena, which is
1105 associated only with the `main heap', i.e. the conventional free
1106 store obtained with calls to MORECORE() (usually sbrk). The `av'
1107 array is never mentioned directly in the code, but instead used via
1108 bin access macros. */
1110 typedef struct _arena {
1111 mbinptr av[2*NAV + 2];
1112 struct _arena *next;
1113 size_t size;
1114 #if THREAD_STATS
1115 long stat_lock_direct, stat_lock_loop, stat_lock_wait;
1116 #endif
1117 mutex_t mutex;
1118 } arena;
1121 /* A heap is a single contiguous memory region holding (coalescable)
1122 malloc_chunks. It is allocated with mmap() and always starts at an
1123 address aligned to HEAP_MAX_SIZE. Not used unless compiling for
1124 multiple threads. */
1126 typedef struct _heap_info {
1127 arena *ar_ptr; /* Arena for this heap. */
1128 struct _heap_info *prev; /* Previous heap. */
1129 size_t size; /* Current size in bytes. */
1130 size_t pad; /* Make sure the following data is properly aligned. */
1131 } heap_info;
1135 Static functions (forward declarations)
1139 static void chunk_free(arena *ar_ptr, mchunkptr p);
1140 static mchunkptr chunk_alloc(arena *ar_ptr, INTERNAL_SIZE_T size);
1141 static int main_trim(size_t pad);
1143 static int heap_trim(heap_info *heap, size_t pad);
1146 static void chunk_free();
1147 static mchunkptr chunk_alloc();
1148 static int main_trim();
1150 static int heap_trim();
1156 /* sizes, alignments */
1158 #define SIZE_SZ (sizeof(INTERNAL_SIZE_T))
1159 #define MALLOC_ALIGNMENT (SIZE_SZ + SIZE_SZ)
1160 #define MALLOC_ALIGN_MASK (MALLOC_ALIGNMENT - 1)
1161 #define MINSIZE (sizeof(struct malloc_chunk))
1163 /* conversion from malloc headers to user pointers, and back */
1165 #define chunk2mem(p) ((Void_t*)((char*)(p) + 2*SIZE_SZ))
1166 #define mem2chunk(mem) ((mchunkptr)((char*)(mem) - 2*SIZE_SZ))
1168 /* pad request bytes into a usable size */
1170 #define request2size(req) \
1171 (((long)((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) < \
1172 (long)(MINSIZE + MALLOC_ALIGN_MASK)) ? MINSIZE : \
1173 (((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) & ~(MALLOC_ALIGN_MASK)))
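/* For example, with 4-byte SIZE_SZ and 8-byte alignment (a sketch of the
   arithmetic only): request2size(0) == 16 (MINSIZE), request2size(12) == 16,
   request2size(13) == 24, and request2size(100) == 104. */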
1175 /* Check if m has acceptable alignment */
1177 #define aligned_OK(m) (((unsigned long)((m)) & (MALLOC_ALIGN_MASK)) == 0)
1183 Physical chunk operations
1187 /* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */
1189 #define PREV_INUSE 0x1
1191 /* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */
1193 #define IS_MMAPPED 0x2
1195 /* Bits to mask off when extracting size */
1197 #define SIZE_BITS (PREV_INUSE|IS_MMAPPED)
1200 /* Ptr to next physical malloc_chunk. */
1202 #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->size & ~PREV_INUSE) ))
1204 /* Ptr to previous physical malloc_chunk */
1206 #define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_size) ))
1209 /* Treat space at ptr + offset as a chunk */
1211 #define chunk_at_offset(p, s) ((mchunkptr)(((char*)(p)) + (s)))
1217 Dealing with use bits
1220 /* extract p's inuse bit */
1222 #define inuse(p)\
1223 ((((mchunkptr)(((char*)(p))+((p)->size & ~PREV_INUSE)))->size) & PREV_INUSE)
1225 /* extract inuse bit of previous chunk */
1227 #define prev_inuse(p) ((p)->size & PREV_INUSE)
1229 /* check for mmap()'ed chunk */
1231 #define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED)
1233 /* set/clear chunk as in use without otherwise disturbing */
1235 #define set_inuse(p) \
1236 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size |= PREV_INUSE
1238 #define clear_inuse(p) \
1239 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size &= ~(PREV_INUSE)
1241 /* check/set/clear inuse bits in known places */
1243 #define inuse_bit_at_offset(p, s)\
1244 (((mchunkptr)(((char*)(p)) + (s)))->size & PREV_INUSE)
1246 #define set_inuse_bit_at_offset(p, s)\
1247 (((mchunkptr)(((char*)(p)) + (s)))->size |= PREV_INUSE)
1249 #define clear_inuse_bit_at_offset(p, s)\
1250 (((mchunkptr)(((char*)(p)) + (s)))->size &= ~(PREV_INUSE))
1256 Dealing with size fields
1259 /* Get size, ignoring use bits */
1261 #define chunksize(p) ((p)->size & ~(SIZE_BITS))
1263 /* Set size at head, without disturbing its use bit */
1265 #define set_head_size(p, s) ((p)->size = (((p)->size & PREV_INUSE) | (s)))
1267 /* Set size/use ignoring previous bits in header */
1269 #define set_head(p, s) ((p)->size = (s))
1271 /* Set size at footer (only when chunk is not in use) */
1273 #define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_size = (s))
1281 #define bin_at(a, i) ((mbinptr)((char*)&(((a)->av)[2*(i) + 2]) - 2*SIZE_SZ))
1282 #define init_bin(a, i) ((a)->av[2*i+2] = (a)->av[2*i+3] = bin_at((a), i))
1283 #define next_bin(b) ((mbinptr)((char*)(b) + 2 * sizeof(mbinptr)))
1284 #define prev_bin(b) ((mbinptr)((char*)(b) - 2 * sizeof(mbinptr)))
1287 The first 2 bins are never indexed. The corresponding av cells are instead
1288 used for bookkeeping. This is not to save space, but to simplify
1289 indexing, maintain locality, and avoid some initialization tests.
1292 #define binblocks(a) (bin_at(a,0)->size)/* bitvector of nonempty blocks */
1293 #define top(a) (bin_at(a,0)->fd) /* The topmost chunk */
1294 #define last_remainder(a) (bin_at(a,1)) /* remainder from last split */
1297 Because top initially points to its own bin with initial
1298 zero size, thus forcing extension on the first malloc request,
1299 we avoid having any special code in malloc to check whether
1300 it even exists yet. But we still need to in malloc_extend_top.
1303 #define initial_top(a) ((mchunkptr)bin_at(a, 0))
1307 /* field-extraction macros */
1309 #define first(b) ((b)->fd)
1310 #define last(b) ((b)->bk)
1316 #define bin_index(sz) \
1317 (((((unsigned long)(sz)) >> 9) == 0) ? (((unsigned long)(sz)) >> 3): \
1318 ((((unsigned long)(sz)) >> 9) <= 4) ? 56 + (((unsigned long)(sz)) >> 6): \
1319 ((((unsigned long)(sz)) >> 9) <= 20) ? 91 + (((unsigned long)(sz)) >> 9): \
1320 ((((unsigned long)(sz)) >> 9) <= 84) ? 110 + (((unsigned long)(sz)) >> 12): \
1321 ((((unsigned long)(sz)) >> 9) <= 340) ? 119 + (((unsigned long)(sz)) >> 15): \
1322 ((((unsigned long)(sz)) >> 9) <= 1364) ? 124 + (((unsigned long)(sz)) >> 18): \
1323 126)
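/* Worked examples (illustration only): bin_index(40) == 5,
   bin_index(512) == 64, bin_index(600) == 65 and bin_index(4096) == 99,
   matching the spacing described above. */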
1325 bins for chunks < 512 are all spaced 8 bytes apart, and hold
1326 identically sized chunks. This is exploited in malloc.
1329 #define MAX_SMALLBIN 63
1330 #define MAX_SMALLBIN_SIZE 512
1331 #define SMALLBIN_WIDTH 8
1333 #define smallbin_index(sz) (((unsigned long)(sz)) >> 3)
1336 Requests are `small' if both the corresponding and the next bin are small
1339 #define is_small_request(nb) ((nb) < MAX_SMALLBIN_SIZE - SMALLBIN_WIDTH)
1344 To help compensate for the large number of bins, a one-level index
1345 structure is used for bin-by-bin searching. `binblocks' is a
1346 one-word bitvector recording whether groups of BINBLOCKWIDTH bins
1347 have any (possibly) non-empty bins, so they can be skipped over
1348 all at once during traversals. The bits are NOT always
1349 cleared as soon as all bins in a block are empty, but instead only
1350 when all are noticed to be empty during traversal in malloc.
1353 #define BINBLOCKWIDTH 4 /* bins per block */
1355 /* bin<->block macros */
1357 #define idx2binblock(ix) ((unsigned)1 << ((ix) / BINBLOCKWIDTH))
1358 #define mark_binblock(a, ii) (binblocks(a) |= idx2binblock(ii))
1359 #define clear_binblock(a, ii) (binblocks(a) &= ~(idx2binblock(ii)))
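/* For example: idx2binblock(5) == (1 << 1), so bins 4..7 all share the
   second bit of binblocks(a); the bit is set whenever a chunk is placed in
   one of those bins and cleared only when a malloc traversal finds them
   all empty. */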
1364 /* Static bookkeeping data */
1366 /* Helper macro to initialize bins */
1367 #define IAV(i) bin_at(&main_arena, i), bin_at(&main_arena, i)
1369 static arena main_arena = {
1372 IAV(0), IAV(1), IAV(2), IAV(3), IAV(4), IAV(5), IAV(6), IAV(7),
1373 IAV(8), IAV(9), IAV(10), IAV(11), IAV(12), IAV(13), IAV(14), IAV(15),
1374 IAV(16), IAV(17), IAV(18), IAV(19), IAV(20), IAV(21), IAV(22), IAV(23),
1375 IAV(24), IAV(25), IAV(26), IAV(27), IAV(28), IAV(29), IAV(30), IAV(31),
1376 IAV(32), IAV(33), IAV(34), IAV(35), IAV(36), IAV(37), IAV(38), IAV(39),
1377 IAV(40), IAV(41), IAV(42), IAV(43), IAV(44), IAV(45), IAV(46), IAV(47),
1378 IAV(48), IAV(49), IAV(50), IAV(51), IAV(52), IAV(53), IAV(54), IAV(55),
1379 IAV(56), IAV(57), IAV(58), IAV(59), IAV(60), IAV(61), IAV(62), IAV(63),
1380 IAV(64), IAV(65), IAV(66), IAV(67), IAV(68), IAV(69), IAV(70), IAV(71),
1381 IAV(72), IAV(73), IAV(74), IAV(75), IAV(76), IAV(77), IAV(78), IAV(79),
1382 IAV(80), IAV(81), IAV(82), IAV(83), IAV(84), IAV(85), IAV(86), IAV(87),
1383 IAV(88), IAV(89), IAV(90), IAV(91), IAV(92), IAV(93), IAV(94), IAV(95),
1384 IAV(96), IAV(97), IAV(98), IAV(99), IAV(100), IAV(101), IAV(102), IAV(103),
1385 IAV(104), IAV(105), IAV(106), IAV(107), IAV(108), IAV(109), IAV(110), IAV(111),
1386 IAV(112), IAV(113), IAV(114), IAV(115), IAV(116), IAV(117), IAV(118), IAV(119),
1387 IAV(120), IAV(121), IAV(122), IAV(123), IAV(124), IAV(125), IAV(126), IAV(127)
1392 0, 0, 0, /* stat_lock_direct, stat_lock_loop, stat_lock_wait */
1394 MUTEX_INITIALIZER /* mutex */
1399 /* Thread specific data */
1402 static tsd_key_t arena_key;
1403 static mutex_t list_lock = MUTEX_INITIALIZER;
1407 static int stat_n_heaps = 0;
1408 #define THREAD_STAT(x) x
1410 #define THREAD_STAT(x) do ; while(0)
1413 /* variables holding tunable values */
1415 static unsigned long trim_threshold = DEFAULT_TRIM_THRESHOLD;
1416 static unsigned long top_pad = DEFAULT_TOP_PAD;
1417 static unsigned int n_mmaps_max = DEFAULT_MMAP_MAX;
1418 static unsigned long mmap_threshold = DEFAULT_MMAP_THRESHOLD;
1420 /* The first value returned from sbrk */
1421 static char* sbrk_base = (char*)(-1);
1423 /* The maximum memory obtained from system via sbrk */
1424 static unsigned long max_sbrked_mem = 0;
1426 /* The maximum via either sbrk or mmap (too difficult to track with threads) */
1428 static unsigned long max_total_mem = 0;
1431 /* The total memory obtained from system via sbrk */
1432 #define sbrked_mem (main_arena.size)
1434 /* Tracking mmaps */
1436 static unsigned int n_mmaps = 0;
1437 static unsigned int max_n_mmaps = 0;
1438 static unsigned long mmapped_mem = 0;
1439 static unsigned long max_mmapped_mem = 0;
1445 /* Initialization routine. */
1447 static void ptmalloc_init __MALLOC_P ((void)) __attribute__ ((constructor));
1450 ptmalloc_init __MALLOC_P((void))
1453 ptmalloc_init __MALLOC_P((void))
1456 static int first = 1;
1459 /* Initialize the pthreads interface. */
1460 if (__pthread_initialize != NULL)
1461 __pthread_initialize();
1467 mutex_init(&main_arena.mutex);
1468 mutex_init(&list_lock);
1469 tsd_key_create(&arena_key, NULL);
1470 tsd_setspecific(arena_key, (Void_t *)&main_arena);
1479 /* Routines dealing with mmap(). */
1483 #ifndef MAP_ANONYMOUS
1485 static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
1487 #define MMAP(size, prot) ((dev_zero_fd < 0) ? \
1488 (dev_zero_fd = open("/dev/zero", O_RDWR), \
1489 mmap(0, (size), (prot), MAP_PRIVATE, dev_zero_fd, 0)) : \
1490 mmap(0, (size), (prot), MAP_PRIVATE, dev_zero_fd, 0))
1494 #define MMAP(size, prot) \
1495 (mmap(0, (size), (prot), MAP_PRIVATE|MAP_ANONYMOUS, -1, 0))
1500 static mchunkptr mmap_chunk(size_t size)
1502 static mchunkptr mmap_chunk(size) size_t size;
1505 size_t page_mask = malloc_getpagesize - 1;
1508 if(n_mmaps >= n_mmaps_max) return 0; /* too many regions */
1510 /* For mmapped chunks, the overhead is one SIZE_SZ unit larger, because
1511 * there is no following chunk whose prev_size field could be used.
1513 size = (size + SIZE_SZ + page_mask) & ~page_mask;
1515 p = (mchunkptr)MMAP(size, PROT_READ|PROT_WRITE);
1516 if(p == (mchunkptr)-1) return 0;
1519 if (n_mmaps > max_n_mmaps) max_n_mmaps = n_mmaps;
1521 /* We demand that eight bytes into a page must be 8-byte aligned. */
1522 assert(aligned_OK(chunk2mem(p)));
1524 /* The offset to the start of the mmapped region is stored
1525 * in the prev_size field of the chunk; normally it is zero,
1526 * but that can be changed in memalign().
1529 set_head(p, size|IS_MMAPPED);
1531 mmapped_mem += size;
1532 if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
1533 max_mmapped_mem = mmapped_mem;
1535 if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
1536 max_total_mem = mmapped_mem + sbrked_mem;
1542 static void munmap_chunk(mchunkptr p)
1544 static void munmap_chunk(p) mchunkptr p;
1547 INTERNAL_SIZE_T size = chunksize(p);
1550 assert (chunk_is_mmapped(p));
1551 assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
1552 assert((n_mmaps > 0));
1553 assert(((p->prev_size + size) & (malloc_getpagesize-1)) == 0);
1556 mmapped_mem -= (size + p->prev_size);
1558 ret = munmap((char *)p - p->prev_size, size + p->prev_size);
1560 /* munmap returns non-zero on failure */
1567 static mchunkptr mremap_chunk(mchunkptr p, size_t new_size)
1569 static mchunkptr mremap_chunk(p, new_size) mchunkptr p; size_t new_size;
1572 size_t page_mask = malloc_getpagesize - 1;
1573 INTERNAL_SIZE_T offset = p->prev_size;
1574 INTERNAL_SIZE_T size = chunksize(p);
1577 assert (chunk_is_mmapped(p));
1578 assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
1579 assert((n_mmaps > 0));
1580 assert(((size + offset) & (malloc_getpagesize-1)) == 0);
1582 /* Note the extra SIZE_SZ overhead as in mmap_chunk(). */
1583 new_size = (new_size + offset + SIZE_SZ + page_mask) & ~page_mask;
1585 cp = (char *)mremap((char *)p - offset, size + offset, new_size,
1588 if (cp == (char *)-1) return 0;
1590 p = (mchunkptr)(cp + offset);
1592 assert(aligned_OK(chunk2mem(p)));
1594 assert((p->prev_size == offset));
1595 set_head(p, (new_size - offset)|IS_MMAPPED);
1597 mmapped_mem -= size + offset;
1598 mmapped_mem += new_size;
1599 if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
1600 max_mmapped_mem = mmapped_mem;
1602 if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
1603 max_total_mem = mmapped_mem + sbrked_mem;
1608 #endif /* HAVE_MREMAP */
1610 #endif /* HAVE_MMAP */
1614 /* Managing heaps and arenas (for concurrent threads) */
1618 /* Create a new heap. size is automatically rounded up to a multiple
1619 of the page size. */
1623 new_heap(size_t size)
1625 new_heap(size) size_t size;
1628 size_t page_mask = malloc_getpagesize - 1;
1633 if(size < HEAP_MIN_SIZE)
1634 size = HEAP_MIN_SIZE;
1635 size = (size + page_mask) & ~page_mask;
1636 if(size > HEAP_MAX_SIZE)
1638 p1 = (char *)MMAP(HEAP_MAX_SIZE<<1, PROT_NONE);
1639 if(p1 == (char *)-1)
1641 p2 = (char *)(((unsigned long)p1 + HEAP_MAX_SIZE) & ~(HEAP_MAX_SIZE-1));
1644 munmap(p2 + HEAP_MAX_SIZE, HEAP_MAX_SIZE - ul);
1645 if(mprotect(p2, size, PROT_READ|PROT_WRITE) != 0) {
1646 munmap(p2, HEAP_MAX_SIZE);
1649 h = (heap_info *)p2;
1651 THREAD_STAT(stat_n_heaps++);
1655 /* Grow or shrink a heap. size is automatically rounded up to a
1656 multiple of the page size if it is positive. */
1660 grow_heap(heap_info *h, long diff)
1662 grow_heap(h, diff) heap_info *h; long diff;
1665 size_t page_mask = malloc_getpagesize - 1;
1669 diff = (diff + page_mask) & ~page_mask;
1670 new_size = (long)h->size + diff;
1671 if(new_size > HEAP_MAX_SIZE)
1673 if(mprotect((char *)h + h->size, diff, PROT_READ|PROT_WRITE) != 0)
1676 new_size = (long)h->size + diff;
1677 if(new_size < (long)sizeof(*h))
1679 if(mprotect((char *)h + new_size, -diff, PROT_NONE) != 0)
1686 /* Delete a heap. */
1688 #define delete_heap(heap) munmap((char*)(heap), HEAP_MAX_SIZE)
1690 /* arena_get() acquires an arena and locks the corresponding mutex.
1691 First, try the one last locked successfully by this thread. (This
1692 is the common case and handled with a macro for speed.) Then, loop
1693 over the singly linked list of arenas. If no arena is readily
1694 available, create a new one. */
1696 #define arena_get(ptr, size) do { \
1697 Void_t *vptr = NULL; \
1698 ptr = (arena *)tsd_getspecific(arena_key, vptr); \
1699 if(ptr && !mutex_trylock(&ptr->mutex)) { \
1700 THREAD_STAT(++(ptr->stat_lock_direct)); \
1702 ptr = arena_get2(ptr, (size)); \
1708 arena_get2(arena *a_tsd, size_t size)
1710 arena_get2(a_tsd, size) arena *a_tsd; size_t size;
1717 unsigned long misalign;
1719 /* Check the singly-linked list for unlocked arenas. */
1721 for(a = a_tsd->next; a; a = a->next) {
1722 if(!mutex_trylock(&a->mutex))
1726 for(a = &main_arena; a != a_tsd; a = a->next) {
1727 if(!mutex_trylock(&a->mutex))
1731 /* Nothing immediately available, so generate a new arena. */
1732 h = new_heap(size + (sizeof(*h) + sizeof(*a) + MALLOC_ALIGNMENT));
1735 a = h->ar_ptr = (arena *)(h+1);
1736 for(i=0; i<NAV; i++)
1739 mutex_init(&a->mutex);
1740 i = mutex_lock(&a->mutex); /* remember result */
1742 /* Set up the top chunk, with proper alignment. */
1743 ptr = (char *)(a + 1);
1744 misalign = (unsigned long)chunk2mem(ptr) & MALLOC_ALIGN_MASK;
1746 ptr += MALLOC_ALIGNMENT - misalign;
1747 top(a) = (mchunkptr)ptr;
1748 set_head(top(a), (((char*)h + h->size) - ptr) | PREV_INUSE);
1750 /* Add the new arena to the list. */
1751 (void)mutex_lock(&list_lock);
1752 a->next = main_arena.next;
1753 main_arena.next = a;
1754 (void)mutex_unlock(&list_lock);
1756 if(i) /* locking failed; keep arena for further attempts later */
1760 THREAD_STAT(++(a->stat_lock_loop));
1761 tsd_setspecific(arena_key, (Void_t *)a);
1765 /* find the heap and corresponding arena for a given ptr */
1767 #define heap_for_ptr(ptr) \
1768 ((heap_info *)((unsigned long)(ptr) & ~(HEAP_MAX_SIZE-1)))
1769 #define arena_for_ptr(ptr) \
1770 (((mchunkptr)(ptr) < top(&main_arena) && (char *)(ptr) >= sbrk_base) ? \
1771 &main_arena : heap_for_ptr(ptr)->ar_ptr)
1773 #else /* defined(NO_THREADS) */
1775 /* Without concurrent threads, there is only one arena. */
1777 #define arena_get(ptr, sz) (ptr = &main_arena)
1778 #define arena_for_ptr(ptr) (&main_arena)
1780 #endif /* !defined(NO_THREADS) */
1792 These routines make a number of assertions about the states
1793 of data structures that should be true at all times. If any
1794 are not true, it's very likely that a user program has somehow
1795 trashed memory. (It's also possible that there is a coding error
1796 in malloc. In which case, please report it!)
1800 static void do_check_chunk(arena *ar_ptr, mchunkptr p)
1802 static void do_check_chunk(ar_ptr, p) arena *ar_ptr; mchunkptr p;
1805 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
1807 /* No checkable chunk is mmapped */
1808 assert(!chunk_is_mmapped(p));
1811 if(ar_ptr != &main_arena) {
1812 heap_info *heap = heap_for_ptr(p);
1813 assert(heap->ar_ptr == ar_ptr);
1814 assert((char *)p + sz <= (char *)heap + heap->size);
1819 /* Check for legal address ... */
1820 assert((char*)p >= sbrk_base);
1821 if (p != top(ar_ptr))
1822 assert((char*)p + sz <= (char*)top(ar_ptr));
1824 assert((char*)p + sz <= sbrk_base + sbrked_mem);
1830 static void do_check_free_chunk(arena *ar_ptr, mchunkptr p)
1832 static void do_check_free_chunk(ar_ptr, p) arena *ar_ptr; mchunkptr p;
1835 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
1836 mchunkptr next = chunk_at_offset(p, sz);
1838 do_check_chunk(ar_ptr, p);
1840 /* Check whether it claims to be free ... */
1843 /* Must have OK size and fields */
1844 assert((long)sz >= (long)MINSIZE);
1845 assert((sz & MALLOC_ALIGN_MASK) == 0);
1846 assert(aligned_OK(chunk2mem(p)));
1847 /* ... matching footer field */
1848 assert(next->prev_size == sz);
1849 /* ... and is fully consolidated */
1850 assert(prev_inuse(p));
1851 assert (next == top(ar_ptr) || inuse(next));
1853 /* ... and has minimally sane links */
1854 assert(p->fd->bk == p);
1855 assert(p->bk->fd == p);
1859 static void do_check_inuse_chunk(arena *ar_ptr, mchunkptr p)
1861 static void do_check_inuse_chunk(ar_ptr, p) arena *ar_ptr; mchunkptr p;
1864 mchunkptr next = next_chunk(p);
1865 do_check_chunk(ar_ptr, p);
1867 /* Check whether it claims to be in use ... */
1870 /* ... whether its size is OK (it might be a fencepost) ... */
1871 assert(chunksize(p) >= MINSIZE || next->size == (0|PREV_INUSE));
1873 /* ... and is surrounded by OK chunks.
1874 Since more things can be checked with free chunks than inuse ones,
1875 if an inuse chunk borders them and debug is on, it's worth doing them.
1879 mchunkptr prv = prev_chunk(p);
1880 assert(next_chunk(prv) == p);
1881 do_check_free_chunk(ar_ptr, prv);
1883 if (next == top(ar_ptr))
1885 assert(prev_inuse(next));
1886 assert(chunksize(next) >= MINSIZE);
1888 else if (!inuse(next))
1889 do_check_free_chunk(ar_ptr, next);
1894 static void do_check_malloced_chunk(arena *ar_ptr,
1895 mchunkptr p, INTERNAL_SIZE_T s)
1897 static void do_check_malloced_chunk(ar_ptr, p, s)
1898 arena *ar_ptr; mchunkptr p; INTERNAL_SIZE_T s;
1901 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
1904 do_check_inuse_chunk(ar_ptr, p);
1906 /* Legal size ... */
1907 assert((long)sz >= (long)MINSIZE);
1908 assert((sz & MALLOC_ALIGN_MASK) == 0);
1910 assert(room < (long)MINSIZE);
1912 /* ... and alignment */
1913 assert(aligned_OK(chunk2mem(p)));
1916 /* ... and was allocated at front of an available chunk */
1917 assert(prev_inuse(p));
1922 #define check_free_chunk(A,P) do_check_free_chunk(A,P)
1923 #define check_inuse_chunk(A,P) do_check_inuse_chunk(A,P)
1924 #define check_chunk(A,P) do_check_chunk(A,P)
1925 #define check_malloced_chunk(A,P,N) do_check_malloced_chunk(A,P,N)
1927 #define check_free_chunk(A,P)
1928 #define check_inuse_chunk(A,P)
1929 #define check_chunk(A,P)
1930 #define check_malloced_chunk(A,P,N)
1936 Macro-based internal utilities
1941 Linking chunks in bin lists.
1942 Call these only with variables, not arbitrary expressions, as arguments.
1946 Place chunk p of size s in its bin, in size order,
1947 putting it ahead of others of same size.
1951 #define frontlink(A, P, S, IDX, BK, FD) \
1953 if (S < MAX_SMALLBIN_SIZE) \
1955 IDX = smallbin_index(S); \
1956 mark_binblock(A, IDX); \
1957 BK = bin_at(A, IDX); \
1961 FD->bk = BK->fd = P; \
1965 IDX = bin_index(S); \
1966 BK = bin_at(A, IDX); \
1968 if (FD == BK) mark_binblock(A, IDX); \
1971 while (FD != BK && S < chunksize(FD)) FD = FD->fd; \
1976 FD->bk = BK->fd = P; \
1981 /* take a chunk off a list */
1983 #define unlink(P, BK, FD) \
1991 /* Place p as the last remainder */
1993 #define link_last_remainder(A, P) \
1995 last_remainder(A)->fd = last_remainder(A)->bk = P; \
1996 P->fd = P->bk = last_remainder(A); \
1999 /* Clear the last_remainder bin */
2001 #define clear_last_remainder(A) \
2002 (last_remainder(A)->fd = last_remainder(A)->bk = last_remainder(A))
2009 Extend the top-most chunk by obtaining memory from system.
2010 Main interface to sbrk (but see also malloc_trim).
2014 static void malloc_extend_top(arena *ar_ptr, INTERNAL_SIZE_T nb)
2016 static void malloc_extend_top(ar_ptr, nb) arena *ar_ptr; INTERNAL_SIZE_T nb;
2019 unsigned long pagesz = malloc_getpagesize;
2020 mchunkptr old_top = top(ar_ptr); /* Record state of old top */
2021 INTERNAL_SIZE_T old_top_size = chunksize(old_top);
2022 INTERNAL_SIZE_T top_size; /* new size of top chunk */
2025 if(ar_ptr == &main_arena) {
2028 char* brk; /* return value from sbrk */
2029 INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of sbrked space */
2030 INTERNAL_SIZE_T correction; /* bytes for 2nd sbrk call */
2031 char* new_brk; /* return of 2nd sbrk call */
2032 char* old_end = (char*)(chunk_at_offset(old_top, old_top_size));
2034 /* Pad request with top_pad plus minimal overhead */
2035 INTERNAL_SIZE_T sbrk_size = nb + top_pad + MINSIZE;
2037 /* If not the first time through, round to preserve page boundary */
2038 /* Otherwise, we need to correct to a page size below anyway. */
2039 /* (We also correct below if an intervening foreign sbrk call.) */
2041 if (sbrk_base != (char*)(-1))
2042 sbrk_size = (sbrk_size + (pagesz - 1)) & ~(pagesz - 1);
2044 brk = (char*)(MORECORE (sbrk_size));
2046 /* Fail if sbrk failed or if a foreign sbrk call killed our space */
2047 if (brk == (char*)(MORECORE_FAILURE) ||
2048 (brk < old_end && old_top != initial_top(&main_arena)))
2051 sbrked_mem += sbrk_size;
2053 if (brk == old_end) { /* can just add bytes to current top */
2054 top_size = sbrk_size + old_top_size;
2055 set_head(old_top, top_size | PREV_INUSE);
2056 old_top = 0; /* don't free below */
2058 if (sbrk_base == (char*)(-1)) /* First time through. Record base */
2061 /* Someone else called sbrk(). Count those bytes as sbrked_mem. */
2062 sbrked_mem += brk - (char*)old_end;
2064 /* Guarantee alignment of first new chunk made from this space */
2065 front_misalign = (unsigned long)chunk2mem(brk) & MALLOC_ALIGN_MASK;
2066 if (front_misalign > 0) {
2067 correction = (MALLOC_ALIGNMENT) - front_misalign;
2072 /* Guarantee the next brk will be at a page boundary */
2073 correction += pagesz - ((unsigned long)(brk + sbrk_size) & (pagesz - 1));
2075 /* Allocate correction */
2076 new_brk = (char*)(MORECORE (correction));
2077 if (new_brk == (char*)(MORECORE_FAILURE)) return;
2079 sbrked_mem += correction;
2081 top(&main_arena) = (mchunkptr)brk;
2082 top_size = new_brk - brk + correction;
2083 set_head(top(&main_arena), top_size | PREV_INUSE);
2085 if (old_top == initial_top(&main_arena))
2086 old_top = 0; /* don't free below */
2089 if ((unsigned long)sbrked_mem > (unsigned long)max_sbrked_mem)
2090 max_sbrked_mem = sbrked_mem;
2092 if ((unsigned long)(mmapped_mem + sbrked_mem) >
2093 (unsigned long)max_total_mem)
2094 max_total_mem = mmapped_mem + sbrked_mem;
2098 } else { /* ar_ptr != &main_arena */
2099 heap_info *old_heap, *heap;
2100 size_t old_heap_size;
2102 if(old_top_size < MINSIZE) /* this should never happen */
2105 /* First try to extend the current heap. */
2106 if(MINSIZE + nb <= old_top_size)
2108 old_heap = heap_for_ptr(old_top);
2109 old_heap_size = old_heap->size;
2110 if(grow_heap(old_heap, MINSIZE + nb - old_top_size) == 0) {
2111 ar_ptr->size += old_heap->size - old_heap_size;
2112 top_size = ((char *)old_heap + old_heap->size) - (char *)old_top;
2113 set_head(old_top, top_size | PREV_INUSE);
2117 /* A new heap must be created. */
2118 heap = new_heap(nb + top_pad + (MINSIZE + sizeof(*heap)));
2121 heap->ar_ptr = ar_ptr;
2122 heap->prev = old_heap;
2123 ar_ptr->size += heap->size;
2125 /* Set up the new top, so we can safely use chunk_free() below. */
2126 top(ar_ptr) = chunk_at_offset(heap, sizeof(*heap));
2127 top_size = heap->size - sizeof(*heap);
2128 set_head(top(ar_ptr), top_size | PREV_INUSE);
2130 #endif /* !defined(NO_THREADS) */
2132 /* We always land on a page boundary */
2133 assert(((unsigned long)((char*)top(ar_ptr) + top_size) & (pagesz-1)) == 0);
2135 /* Setup fencepost and free the old top chunk. */
2137 /* The fencepost takes at least MINSIZE bytes, because it might
2138 become the top chunk again later. Note that a footer is set
2139 up, too, although the chunk is marked in use. */
2140 old_top_size -= MINSIZE;
2141 set_head(chunk_at_offset(old_top, old_top_size + 2*SIZE_SZ), 0|PREV_INUSE);
2142 if(old_top_size >= MINSIZE) {
2143 set_head(chunk_at_offset(old_top, old_top_size), (2*SIZE_SZ)|PREV_INUSE);
2144 set_foot(chunk_at_offset(old_top, old_top_size), (2*SIZE_SZ));
2145 set_head_size(old_top, old_top_size);
2146 chunk_free(ar_ptr, old_top);
2148 set_head(old_top, (old_top_size + 2*SIZE_SZ)|PREV_INUSE);
2149 set_foot(old_top, (old_top_size + 2*SIZE_SZ));
2157 /* Main public routines */
2163 The requested size is first converted into a usable form, `nb'.
2164 This currently means to add 4 bytes overhead plus possibly more to
2165 obtain 8-byte alignment and/or to obtain a size of at least
2166 MINSIZE (currently 16, 24, or 32 bytes), the smallest allocatable
2167 size. (All fits are considered `exact' if they are within MINSIZE bytes.)
2170 From there, the first of the following steps that succeeds is taken:
2172 1. The bin corresponding to the request size is scanned, and if
2173 a chunk of exactly the right size is found, it is taken.
2175 2. The most recently remaindered chunk is used if it is big
2176 enough. This is a form of (roving) first fit, used only in
2177 the absence of exact fits. Runs of consecutive requests use
2178 the remainder of the chunk used for the previous such request
2179 whenever possible. This limited use of a first-fit style
2180 allocation strategy tends to give contiguous chunks
2181 coextensive lifetimes, which improves locality and can reduce
2182 fragmentation in the long run.
2184 3. Other bins are scanned in increasing size order, using a
2185 chunk big enough to fulfill the request, and splitting off
2186 any remainder. This search is strictly by best-fit; i.e.,
2187 the smallest (with ties going to approximately the least
2188 recently used) chunk that fits is selected.
2190 4. If large enough, the chunk bordering the end of memory
2191 (`top') is split off. (This use of `top' is in accord with
2192 the best-fit search rule. In effect, `top' is treated as
2193 larger (and thus less well fitting) than any other available
2194 chunk since it can be extended to be as large as necessary
2195 (up to system limitations).
2197 5. If the request size meets the mmap threshold and the
2198 system supports mmap, and there are few enough currently
2199 allocated mmapped regions, and a call to mmap succeeds,
2200 the request is allocated via direct memory mapping.
2202 6. Otherwise, the top of memory is extended by
2203 obtaining more space from the system (normally using sbrk,
2204 but definable to anything else via the MORECORE macro).
2205 Memory is gathered from the system (in system page-sized
2206 units) in a way that allows chunks obtained across different
2207 sbrk calls to be consolidated, but does not require
2208 contiguous memory. Thus, it should be safe to intersperse
2209 mallocs with other sbrk calls.
2212 All allocations are made from the `lowest' part of any found
2213 chunk. (The implementation invariant is that prev_inuse is
2214 always true of any allocated chunk; i.e., that each allocated
2215 chunk borders either a previously allocated and still in-use chunk,
2216 or the base of its memory arena.)
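/* Illustrative sketch only (not part of the allocator): roughly how a user
   request is converted to the padded size `nb' described above, assuming the
   4 bytes of overhead, 8-byte alignment and 16-byte MINSIZE mentioned in the
   comment.  The real conversion is done by the request2size() macro defined
   earlier in this file; example_pad_request is a hypothetical name. */
#if 0
static size_t example_pad_request(size_t bytes)
{
  size_t nb = (bytes + 4 + 7) & ~(size_t)7;  /* add overhead, round up to 8 */
  return nb < 16 ? 16 : nb;                  /* never below MINSIZE */
}
#endif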
2221 Void_t* mALLOc(size_t bytes)
2223 Void_t* mALLOc(bytes) size_t bytes;
2227 INTERNAL_SIZE_T nb = request2size(bytes); /* padded request size; */
2230 arena_get(ar_ptr, nb + top_pad);
2233 victim = chunk_alloc(ar_ptr, nb);
2234 (void)mutex_unlock(&ar_ptr->mutex);
2235 return victim ? chunk2mem(victim) : 0;
2240 chunk_alloc(arena *ar_ptr, INTERNAL_SIZE_T nb)
2242 chunk_alloc(ar_ptr, nb) arena *ar_ptr; INTERNAL_SIZE_T nb;
2245 mchunkptr victim; /* inspected/selected chunk */
2246 INTERNAL_SIZE_T victim_size; /* its size */
2247 int idx; /* index for bin traversal */
2248 mbinptr bin; /* associated bin */
2249 mchunkptr remainder; /* remainder from a split */
2250 long remainder_size; /* its size */
2251 int remainder_index; /* its bin index */
2252 unsigned long block; /* block traverser bit */
2253 int startidx; /* first bin of a traversed block */
2254 mchunkptr fwd; /* misc temp for linking */
2255 mchunkptr bck; /* misc temp for linking */
2256 mbinptr q; /* misc temp */
2259 /* Check for exact match in a bin */
2261 if (is_small_request(nb)) /* Faster version for small requests */
2263 idx = smallbin_index(nb);
2265 /* No traversal or size check necessary for small bins. */
2267 q = bin_at(ar_ptr, idx);
2270 /* Also scan the next one, since it would have a remainder < MINSIZE */
2278 victim_size = chunksize(victim);
2279 unlink(victim, bck, fwd);
2280 set_inuse_bit_at_offset(victim, victim_size);
2281 check_malloced_chunk(ar_ptr, victim, nb);
2285 idx += 2; /* Set for bin scan below. We've already scanned 2 bins. */
2290 idx = bin_index(nb);
2291 bin = bin_at(ar_ptr, idx);
2293 for (victim = last(bin); victim != bin; victim = victim->bk)
2295 victim_size = chunksize(victim);
2296 remainder_size = victim_size - nb;
2298 if (remainder_size >= (long)MINSIZE) /* too big */
2300 --idx; /* adjust to rescan below after checking last remainder */
2304 else if (remainder_size >= 0) /* exact fit */
2306 unlink(victim, bck, fwd);
2307 set_inuse_bit_at_offset(victim, victim_size);
2308 check_malloced_chunk(ar_ptr, victim, nb);
2317 /* Try to use the last split-off remainder */
2319 if ( (victim = last_remainder(ar_ptr)->fd) != last_remainder(ar_ptr))
2321 victim_size = chunksize(victim);
2322 remainder_size = victim_size - nb;
2324 if (remainder_size >= (long)MINSIZE) /* re-split */
2326 remainder = chunk_at_offset(victim, nb);
2327 set_head(victim, nb | PREV_INUSE);
2328 link_last_remainder(ar_ptr, remainder);
2329 set_head(remainder, remainder_size | PREV_INUSE);
2330 set_foot(remainder, remainder_size);
2331 check_malloced_chunk(ar_ptr, victim, nb);
2335 clear_last_remainder(ar_ptr);
2337 if (remainder_size >= 0) /* exhaust */
2339 set_inuse_bit_at_offset(victim, victim_size);
2340 check_malloced_chunk(ar_ptr, victim, nb);
2344 /* Else place in bin */
2346 frontlink(ar_ptr, victim, victim_size, remainder_index, bck, fwd);
2350 If there are any possibly nonempty big-enough blocks,
2351 search for best fitting chunk by scanning bins in blockwidth units.
2354 if ( (block = idx2binblock(idx)) <= binblocks(ar_ptr))
2357 /* Get to the first marked block */
2359 if ( (block & binblocks(ar_ptr)) == 0)
2361 /* force to an even block boundary */
2362 idx = (idx & ~(BINBLOCKWIDTH - 1)) + BINBLOCKWIDTH;
2364 while ((block & binblocks(ar_ptr)) == 0)
2366 idx += BINBLOCKWIDTH;
2371 /* For each possibly nonempty block ... */
2374 startidx = idx; /* (track incomplete blocks) */
2375 q = bin = bin_at(ar_ptr, idx);
2377 /* For each bin in this block ... */
2380 /* Find and use first big enough chunk ... */
2382 for (victim = last(bin); victim != bin; victim = victim->bk)
2384 victim_size = chunksize(victim);
2385 remainder_size = victim_size - nb;
2387 if (remainder_size >= (long)MINSIZE) /* split */
2389 remainder = chunk_at_offset(victim, nb);
2390 set_head(victim, nb | PREV_INUSE);
2391 unlink(victim, bck, fwd);
2392 link_last_remainder(ar_ptr, remainder);
2393 set_head(remainder, remainder_size | PREV_INUSE);
2394 set_foot(remainder, remainder_size);
2395 check_malloced_chunk(ar_ptr, victim, nb);
2399 else if (remainder_size >= 0) /* take */
2401 set_inuse_bit_at_offset(victim, victim_size);
2402 unlink(victim, bck, fwd);
2403 check_malloced_chunk(ar_ptr, victim, nb);
2409 bin = next_bin(bin);
2411 } while ((++idx & (BINBLOCKWIDTH - 1)) != 0);
2413 /* Clear out the block bit. */
2415 do /* Possibly backtrack to try to clear a partial block */
2417 if ((startidx & (BINBLOCKWIDTH - 1)) == 0)
2419 binblocks(ar_ptr) &= ~block;
2424 } while (first(q) == q);
2426 /* Get to the next possibly nonempty block */
2428 if ( (block <<= 1) <= binblocks(ar_ptr) && (block != 0) )
2430 while ((block & binblocks(ar_ptr)) == 0)
2432 idx += BINBLOCKWIDTH;
2442 /* Try to use top chunk */
2444 /* Require that there be a remainder, ensuring top always exists */
2445 if ( (remainder_size = chunksize(top(ar_ptr)) - nb) < (long)MINSIZE)
2449 /* If big and would otherwise need to extend, try to use mmap instead */
2450 if ((unsigned long)nb >= (unsigned long)mmap_threshold &&
2451 (victim = mmap_chunk(nb)) != 0)
2456 malloc_extend_top(ar_ptr, nb);
2457 if ((remainder_size = chunksize(top(ar_ptr)) - nb) < (long)MINSIZE)
2458 return 0; /* propagate failure */
2461 victim = top(ar_ptr);
2462 set_head(victim, nb | PREV_INUSE);
2463 top(ar_ptr) = chunk_at_offset(victim, nb);
2464 set_head(top(ar_ptr), remainder_size | PREV_INUSE);
2465 check_malloced_chunk(ar_ptr, victim, nb);
2479 1. free(0) has no effect.
2481 2. If the chunk was allocated via mmap, it is released via munmap().
2483 3. If a returned chunk borders the current high end of memory,
2484 it is consolidated into the top, and if the total unused
2485 topmost memory exceeds the trim threshold, malloc_trim is
2488 4. Other chunks are consolidated as they arrive, and
2489 placed in corresponding bins. (This includes the case of
2490 consolidating with the current `last_remainder').
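/* Usage sketch (illustrative only): as in case 3 above, returning a block
   that ends up bordering the top of memory consolidates it into the top
   chunk, after which trimming may return memory to the system.  The 512k
   figure is only an assumption standing in for a value above trim_threshold. */
#if 0
{
  void *big = malloc(512 * 1024);   /* assumed to exceed trim_threshold */
  /* ... use the block ... */
  free(big);                        /* merges into top; the unused topmost
                                       space may now be given back via trim */
}
#endif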
2496 void fREe(Void_t* mem)
2498 void fREe(mem) Void_t* mem;
2502 mchunkptr p; /* chunk corresponding to mem */
2504 if (mem == 0) /* free(0) has no effect */
2510 if (chunk_is_mmapped(p)) /* release mmapped memory. */
2517 ar_ptr = arena_for_ptr(p);
2519 if(!mutex_trylock(&ar_ptr->mutex))
2520 ++(ar_ptr->stat_lock_direct);
2522 (void)mutex_lock(&ar_ptr->mutex);
2523 ++(ar_ptr->stat_lock_wait);
2526 (void)mutex_lock(&ar_ptr->mutex);
2528 chunk_free(ar_ptr, p);
2529 (void)mutex_unlock(&ar_ptr->mutex);
2534 chunk_free(arena *ar_ptr, mchunkptr p)
2536 chunk_free(ar_ptr, p) arena *ar_ptr; mchunkptr p;
2539 INTERNAL_SIZE_T hd = p->size; /* its head field */
2540 INTERNAL_SIZE_T sz; /* its size */
2541 int idx; /* its bin index */
2542 mchunkptr next; /* next contiguous chunk */
2543 INTERNAL_SIZE_T nextsz; /* its size */
2544 INTERNAL_SIZE_T prevsz; /* size of previous contiguous chunk */
2545 mchunkptr bck; /* misc temp for linking */
2546 mchunkptr fwd; /* misc temp for linking */
2547 int islr; /* track whether merging with last_remainder */
2549 check_inuse_chunk(ar_ptr, p);
2551 sz = hd & ~PREV_INUSE;
2552 next = chunk_at_offset(p, sz);
2553 nextsz = chunksize(next);
2555 if (next == top(ar_ptr)) /* merge with top */
2559 if (!(hd & PREV_INUSE)) /* consolidate backward */
2561 prevsz = p->prev_size;
2562 p = chunk_at_offset(p, -prevsz);
2564 unlink(p, bck, fwd);
2567 set_head(p, sz | PREV_INUSE);
2571 if(ar_ptr == &main_arena) {
2573 if ((unsigned long)(sz) >= (unsigned long)trim_threshold)
2577 heap_info *heap = heap_for_ptr(p);
2579 assert(heap->ar_ptr == ar_ptr);
2581 /* Try to get rid of completely empty heaps, if possible. */
2582 if((unsigned long)(sz) >= (unsigned long)trim_threshold ||
2583 p == chunk_at_offset(heap, sizeof(*heap)))
2584 heap_trim(heap, top_pad);
2590 set_head(next, nextsz); /* clear inuse bit */
2594 if (!(hd & PREV_INUSE)) /* consolidate backward */
2596 prevsz = p->prev_size;
2597 p = chunk_at_offset(p, -prevsz);
2600 if (p->fd == last_remainder(ar_ptr)) /* keep as last_remainder */
2603 unlink(p, bck, fwd);
2606 if (!(inuse_bit_at_offset(next, nextsz))) /* consolidate forward */
2610 if (!islr && next->fd == last_remainder(ar_ptr))
2611 /* re-insert last_remainder */
2614 link_last_remainder(ar_ptr, p);
2617 unlink(next, bck, fwd);
2620 set_head(p, sz | PREV_INUSE);
2623 frontlink(ar_ptr, p, sz, idx, bck, fwd);
2634 Chunks that were obtained via mmap cannot be extended or shrunk
2635 unless HAVE_MREMAP is defined, in which case mremap is used.
2636 Otherwise, if their reallocation is for additional space, they are
2637 copied. If for less, they are just left alone.
2639 Otherwise, if the reallocation is for additional space, and the
2640 chunk can be extended, it is, else a malloc-copy-free sequence is
2641 taken. There are several different ways that a chunk could be
2642 extended. All are tried:
2644 * Extending forward into following adjacent free chunk.
2645 * Shifting backwards, joining preceding adjacent space
2646 * Both shifting backwards and extending forward.
2647 * Extending into newly sbrked space
2649 Unless the #define REALLOC_ZERO_BYTES_FREES is set, realloc with a
2650 size argument of zero (re)allocates a minimum-sized chunk.
2652 If the reallocation is for less space, and the new request is for
2653 a `small' (<512 bytes) size, then the newly unused space is lopped off and freed.
2656 The old unix realloc convention of allowing the last-free'd chunk
2657 to be used as an argument to realloc is no longer supported.
2658 I don't know of any programs still relying on this feature,
2659 and allowing it would also allow too many other incorrect
2660 usages of realloc to be sensible.
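/* Usage sketch (illustrative only): the returned pointer may differ from the
   argument, and when the request cannot be satisfied the old block is left
   untouched, so the old pointer should not be overwritten before checking. */
#if 0
{
  char *p = (char *) malloc(100);
  char *q = (char *) realloc(p, 200);
  if (q == 0)
    free(p);    /* grow failed; the original 100-byte block is still valid */
  else
    p = q;      /* may or may not be the same address as before */
}
#endif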
2667 Void_t* rEALLOc(Void_t* oldmem, size_t bytes)
2669 Void_t* rEALLOc(oldmem, bytes) Void_t* oldmem; size_t bytes;
2673 INTERNAL_SIZE_T nb; /* padded request size */
2675 mchunkptr oldp; /* chunk corresponding to oldmem */
2676 INTERNAL_SIZE_T oldsize; /* its size */
2678 mchunkptr newp; /* chunk to return */
2679 INTERNAL_SIZE_T newsize; /* its size */
2680 Void_t* newmem; /* corresponding user mem */
2682 mchunkptr next; /* next contiguous chunk after oldp */
2683 INTERNAL_SIZE_T nextsize; /* its size */
2685 mchunkptr prev; /* previous contiguous chunk before oldp */
2686 INTERNAL_SIZE_T prevsize; /* its size */
2688 mchunkptr remainder; /* holds split off extra space from newp */
2689 INTERNAL_SIZE_T remainder_size; /* its size */
2691 mchunkptr bck; /* misc temp for linking */
2692 mchunkptr fwd; /* misc temp for linking */
2694 #ifdef REALLOC_ZERO_BYTES_FREES
2695 if (bytes == 0) { fREe(oldmem); return 0; }
2699 /* realloc of null is supposed to be same as malloc */
2700 if (oldmem == 0) return mALLOc(bytes);
2702 newp = oldp = mem2chunk(oldmem);
2703 newsize = oldsize = chunksize(oldp);
2706 nb = request2size(bytes);
2709 if (chunk_is_mmapped(oldp))
2712 newp = mremap_chunk(oldp, nb);
2713 if(newp) return chunk2mem(newp);
2715 /* Note the extra SIZE_SZ overhead. */
2716 if(oldsize - SIZE_SZ >= nb) return oldmem; /* do nothing */
2717 /* Must alloc, copy, free. */
2718 newmem = mALLOc(bytes);
2719 if (newmem == 0) return 0; /* propagate failure */
2720 MALLOC_COPY(newmem, oldmem, oldsize - 2*SIZE_SZ);
2726 ar_ptr = arena_for_ptr(oldp);
2728 if(!mutex_trylock(&ar_ptr->mutex))
2729 ++(ar_ptr->stat_lock_direct);
2731 (void)mutex_lock(&ar_ptr->mutex);
2732 ++(ar_ptr->stat_lock_wait);
2735 (void)mutex_lock(&ar_ptr->mutex);
2738 /* As in malloc(), remember this arena for the next allocation. */
2739 tsd_setspecific(arena_key, (Void_t *)ar_ptr);
2741 check_inuse_chunk(ar_ptr, oldp);
2743 if ((long)(oldsize) < (long)(nb))
2746 /* Try expanding forward */
2748 next = chunk_at_offset(oldp, oldsize);
2749 if (next == top(ar_ptr) || !inuse(next))
2751 nextsize = chunksize(next);
2753 /* Forward into top only if a remainder */
2754 if (next == top(ar_ptr))
2756 if ((long)(nextsize + newsize) >= (long)(nb + MINSIZE))
2758 newsize += nextsize;
2759 top(ar_ptr) = chunk_at_offset(oldp, nb);
2760 set_head(top(ar_ptr), (newsize - nb) | PREV_INUSE);
2761 set_head_size(oldp, nb);
2762 (void)mutex_unlock(&ar_ptr->mutex);
2763 return chunk2mem(oldp);
2767 /* Forward into next chunk */
2768 else if (((long)(nextsize + newsize) >= (long)(nb)))
2770 unlink(next, bck, fwd);
2771 newsize += nextsize;
2781 /* Try shifting backwards. */
2783 if (!prev_inuse(oldp))
2785 prev = prev_chunk(oldp);
2786 prevsize = chunksize(prev);
2788 /* try forward + backward first to save a later consolidation */
2793 if (next == top(ar_ptr))
2795 if ((long)(nextsize + prevsize + newsize) >= (long)(nb + MINSIZE))
2797 unlink(prev, bck, fwd);
2799 newsize += prevsize + nextsize;
2800 newmem = chunk2mem(newp);
2801 MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2802 top(ar_ptr) = chunk_at_offset(newp, nb);
2803 set_head(top(ar_ptr), (newsize - nb) | PREV_INUSE);
2804 set_head_size(newp, nb);
2805 (void)mutex_unlock(&ar_ptr->mutex);
2810 /* into next chunk */
2811 else if (((long)(nextsize + prevsize + newsize) >= (long)(nb)))
2813 unlink(next, bck, fwd);
2814 unlink(prev, bck, fwd);
2816 newsize += nextsize + prevsize;
2817 newmem = chunk2mem(newp);
2818 MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2824 if (prev != 0 && (long)(prevsize + newsize) >= (long)nb)
2826 unlink(prev, bck, fwd);
2828 newsize += prevsize;
2829 newmem = chunk2mem(newp);
2830 MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2837 newp = chunk_alloc (ar_ptr, nb);
2839 if (newp == 0) /* propagate failure */
2842 /* Avoid copy if newp is next chunk after oldp. */
2843 /* (This can only happen when new chunk is sbrk'ed.) */
2845 if ( newp == next_chunk(oldp))
2847 newsize += chunksize(newp);
2852 /* Otherwise copy, free, and exit */
2853 newmem = chunk2mem(newp);
2854 MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2855 chunk_free(ar_ptr, oldp);
2856 (void)mutex_unlock(&ar_ptr->mutex);
2861 split: /* split off extra room in old or expanded chunk */
2863 if (newsize - nb >= MINSIZE) /* split off remainder */
2865 remainder = chunk_at_offset(newp, nb);
2866 remainder_size = newsize - nb;
2867 set_head_size(newp, nb);
2868 set_head(remainder, remainder_size | PREV_INUSE);
2869 set_inuse_bit_at_offset(remainder, remainder_size);
2870 chunk_free(ar_ptr, remainder);
2874 set_head_size(newp, newsize);
2875 set_inuse_bit_at_offset(newp, newsize);
2878 check_inuse_chunk(ar_ptr, newp);
2879 (void)mutex_unlock(&ar_ptr->mutex);
2880 return chunk2mem(newp);
2890 memalign requests more than enough space from malloc, finds a spot
2891 within that chunk that meets the alignment request, and then
2892 possibly frees the leading and trailing space.
2894 The alignment argument must be a power of two. This property is not
2895 checked by memalign, so misuse may result in random runtime errors.
2897 8-byte alignment is guaranteed by normal malloc calls, so don't
2898 bother calling memalign with an argument of 8 or less.
2900 Overreliance on memalign is a sure way to fragment space.
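/* Usage sketch (illustrative only): the alignment must be a power of two;
   alignments of 8 or less are already guaranteed by plain malloc. */
#if 0
{
  void *buf = memalign(64, 1000);   /* start address is a multiple of 64 */
  /* ... */
  free(buf);
}
#endif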
2906 Void_t* mEMALIGn(size_t alignment, size_t bytes)
2908 Void_t* mEMALIGn(alignment, bytes) size_t alignment; size_t bytes;
2912 INTERNAL_SIZE_T nb; /* padded request size */
2913 char* m; /* memory returned by malloc call */
2914 mchunkptr p; /* corresponding chunk */
2915 char* brk; /* alignment point within p */
2916 mchunkptr newp; /* chunk to return */
2917 INTERNAL_SIZE_T newsize; /* its size */
2918 INTERNAL_SIZE_T leadsize; /* leading space before alignment point */
2919 mchunkptr remainder; /* spare room at end to split off */
2920 long remainder_size; /* its size */
2922 /* If less alignment is needed than we provide anyway, just relay to malloc */
2924 if (alignment <= MALLOC_ALIGNMENT) return mALLOc(bytes);
2926 /* Otherwise, ensure that it is at least a minimum chunk size */
2928 if (alignment < MINSIZE) alignment = MINSIZE;
2930 /* Call malloc with worst case padding to hit alignment. */
2932 nb = request2size(bytes);
2933 arena_get(ar_ptr, nb + alignment + MINSIZE);
2936 p = chunk_alloc(ar_ptr, nb + alignment + MINSIZE);
2939 (void)mutex_unlock(&ar_ptr->mutex);
2940 return 0; /* propagate failure */
2945 if ((((unsigned long)(m)) % alignment) == 0) /* aligned */
2948 if(chunk_is_mmapped(p)) {
2949 (void)mutex_unlock(&ar_ptr->mutex);
2950 return chunk2mem(p); /* nothing more to do */
2954 else /* misaligned */
2957 Find an aligned spot inside chunk.
2958 Since we need to give back leading space in a chunk of at
2959 least MINSIZE, if the first calculation places us at
2960 a spot with less than MINSIZE leader, we can move to the
2961 next aligned spot -- we've allocated enough total room so that
2962 this is always possible.
2965 brk = (char*)mem2chunk(((unsigned long)(m + alignment - 1)) & -alignment);
2966 if ((long)(brk - (char*)(p)) < (long)MINSIZE) brk = brk + alignment;
2968 newp = (mchunkptr)brk;
2969 leadsize = brk - (char*)(p);
2970 newsize = chunksize(p) - leadsize;
2973 if(chunk_is_mmapped(p))
2975 newp->prev_size = p->prev_size + leadsize;
2976 set_head(newp, newsize|IS_MMAPPED);
2977 (void)mutex_unlock(&ar_ptr->mutex);
2978 return chunk2mem(newp);
2982 /* give back leader, use the rest */
2984 set_head(newp, newsize | PREV_INUSE);
2985 set_inuse_bit_at_offset(newp, newsize);
2986 set_head_size(p, leadsize);
2987 chunk_free(ar_ptr, p);
2990 assert (newsize>=nb && (((unsigned long)(chunk2mem(p))) % alignment) == 0);
2993 /* Also give back spare room at the end */
2995 remainder_size = chunksize(p) - nb;
2997 if (remainder_size >= (long)MINSIZE)
2999 remainder = chunk_at_offset(p, nb);
3000 set_head(remainder, remainder_size | PREV_INUSE);
3001 set_head_size(p, nb);
3002 chunk_free(ar_ptr, remainder);
3005 check_inuse_chunk(ar_ptr, p);
3006 (void)mutex_unlock(&ar_ptr->mutex);
3007 return chunk2mem(p);
3015 valloc just invokes memalign with alignment argument equal
3016 to the page size of the system (or as near to this as can
3017 be figured out from all the includes/defines above.)
3021 Void_t* vALLOc(size_t bytes)
3023 Void_t* vALLOc(bytes) size_t bytes;
3026 return mEMALIGn (malloc_getpagesize, bytes);
3030 pvalloc just invokes valloc for the nearest pagesize
3031 that will accommodate the request
3036 Void_t* pvALLOc(size_t bytes)
3038 Void_t* pvALLOc(bytes) size_t bytes;
3041 size_t pagesize = malloc_getpagesize;
3042 return mEMALIGn (pagesize, (bytes + pagesize - 1) & ~(pagesize - 1));
3047 calloc calls malloc, then zeroes out the allocated chunk.
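/* Usage sketch (illustrative only): n objects of elem_size bytes each,
   all bytes cleared to zero. */
#if 0
{
  int *table = (int *) calloc(100, sizeof(int));  /* 100 ints, all zero */
  /* ... */
  free(table);
}
#endif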
3052 Void_t* cALLOc(size_t n, size_t elem_size)
3054 Void_t* cALLOc(n, elem_size) size_t n; size_t elem_size;
3058 mchunkptr p, oldtop;
3059 INTERNAL_SIZE_T csz, oldtopsize;
3062 INTERNAL_SIZE_T sz = request2size(n * elem_size);
3064 arena_get(ar_ptr, sz);
3068 /* check if malloc_extend_top was called, in which case we don't need to clear */
3070 oldtop = top(ar_ptr);
3071 oldtopsize = chunksize(top(ar_ptr));
3073 p = chunk_alloc (ar_ptr, sz);
3075 /* Only clearing follows, so we can unlock early. */
3076 (void)mutex_unlock(&ar_ptr->mutex);
3084 /* Two optional cases in which clearing is not necessary */
3087 if (chunk_is_mmapped(p)) return mem;
3093 if (p == oldtop && csz > oldtopsize)
3095 /* clear only the bytes from non-freshly-sbrked memory */
3100 MALLOC_ZERO(mem, csz - SIZE_SZ);
3107 cfree just calls free. It is needed/defined on some systems
3108 that pair it with calloc, presumably for odd historical reasons.
3114 void cfree(Void_t *mem)
3116 void cfree(mem) Void_t *mem;
3127 Malloc_trim gives memory back to the system (via negative
3128 arguments to sbrk) if there is unused memory at the `high' end of
3129 the malloc pool. You can call this after freeing large blocks of
3130 memory to potentially reduce the system-level memory requirements
3131 of a program. However, it cannot guarantee to reduce memory. Under
3132 some allocation patterns, some large free blocks of memory will be
3133 locked between two used chunks, so they cannot be given back to the system.
3136 The `pad' argument to malloc_trim represents the amount of free
3137 trailing space to leave untrimmed. If this argument is zero,
3138 only the minimum amount of memory to maintain internal data
3139 structures will be left (one page or less). Non-zero arguments
3140 can be supplied to maintain enough trailing space to service
3141 future expected allocations without having to re-obtain memory from the system.
3144 Malloc_trim returns 1 if it actually released any memory, else 0.
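/* Usage sketch (illustrative only): after freeing many large blocks, ask for
   unused trailing memory to be released while keeping 64k of headroom for
   future requests (the 64k figure is just an example pad value). */
#if 0
{
  /* ... free large blocks ... */
  if (malloc_trim(64 * 1024))
    ;   /* memory was actually returned to the system */
}
#endif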
3149 int malloc_trim(size_t pad)
3151 int malloc_trim(pad) size_t pad;
3156 (void)mutex_lock(&main_arena.mutex);
3157 res = main_trim(pad);
3158 (void)mutex_unlock(&main_arena.mutex);
3162 /* Trim the main arena. */
3166 main_trim(size_t pad)
3168 main_trim(pad) size_t pad;
3171 mchunkptr top_chunk; /* The current top chunk */
3172 long top_size; /* Amount of top-most memory */
3173 long extra; /* Amount to release */
3174 char* current_brk; /* address returned by pre-check sbrk call */
3175 char* new_brk; /* address returned by negative sbrk call */
3177 unsigned long pagesz = malloc_getpagesize;
3179 top_chunk = top(&main_arena);
3180 top_size = chunksize(top_chunk);
3181 extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz;
3183 if (extra < (long)pagesz) /* Not enough memory to release */
3186 /* Test to make sure no one else called sbrk */
3187 current_brk = (char*)(MORECORE (0));
3188 if (current_brk != (char*)(top_chunk) + top_size)
3189 return 0; /* Apparently we don't own memory; must fail */
3191 new_brk = (char*)(MORECORE (-extra));
3193 if (new_brk == (char*)(MORECORE_FAILURE)) { /* sbrk failed? */
3194 /* Try to figure out what we have */
3195 current_brk = (char*)(MORECORE (0));
3196 top_size = current_brk - (char*)top_chunk;
3197 if (top_size >= (long)MINSIZE) /* if not, we are very very dead! */
3199 sbrked_mem = current_brk - sbrk_base;
3200 set_head(top_chunk, top_size | PREV_INUSE);
3202 check_chunk(&main_arena, top_chunk);
3205 sbrked_mem -= extra;
3207 /* Success. Adjust top accordingly. */
3208 set_head(top_chunk, (top_size - extra) | PREV_INUSE);
3209 check_chunk(&main_arena, top_chunk);
3217 heap_trim(heap_info *heap, size_t pad)
3219 heap_trim(heap, pad) heap_info *heap; size_t pad;
3222 unsigned long pagesz = malloc_getpagesize;
3223 arena *ar_ptr = heap->ar_ptr;
3224 mchunkptr top_chunk = top(ar_ptr), p, bck, fwd;
3225 heap_info *prev_heap;
3226 long new_size, top_size, extra;
3228 /* Can this heap go away completely ? */
3229 while(top_chunk == chunk_at_offset(heap, sizeof(*heap))) {
3230 prev_heap = heap->prev;
3231 p = chunk_at_offset(prev_heap, prev_heap->size - (MINSIZE-2*SIZE_SZ));
3232 assert(p->size == (0|PREV_INUSE)); /* must be fencepost */
3234 new_size = chunksize(p) + (MINSIZE-2*SIZE_SZ);
3235 assert(new_size>0 && new_size<(long int)(2*MINSIZE));
3237 new_size += p->prev_size;
3238 assert(new_size>0 && new_size<HEAP_MAX_SIZE);
3239 if(new_size + (HEAP_MAX_SIZE - prev_heap->size) < pad + MINSIZE + pagesz)
3241 ar_ptr->size -= heap->size;
3244 if(!prev_inuse(p)) { /* consolidate backward */
3246 unlink(p, bck, fwd);
3248 assert(((unsigned long)((char*)p + new_size) & (pagesz-1)) == 0);
3249 assert( ((char*)p + new_size) == ((char*)heap + heap->size) );
3250 top(ar_ptr) = top_chunk = p;
3251 set_head(top_chunk, new_size | PREV_INUSE);
3252 check_chunk(ar_ptr, top_chunk);
3254 top_size = chunksize(top_chunk);
3255 extra = ((top_size - pad - MINSIZE + (pagesz-1))/pagesz - 1) * pagesz;
3256 if(extra < (long)pagesz)
3258 /* Try to shrink. */
3259 if(grow_heap(heap, -extra) != 0)
3261 ar_ptr->size -= extra;
3263 /* Success. Adjust top accordingly. */
3264 set_head(top_chunk, (top_size - extra) | PREV_INUSE);
3265 check_chunk(ar_ptr, top_chunk);
3276 This routine tells you how many bytes you can actually use in an
3277 allocated chunk, which may be more than you requested (although
3278 often not). You can use this many bytes without worrying about
3279 overwriting other allocated objects. Not a particularly great
3280 programming practice, but still sometimes useful.
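/* Usage sketch (illustrative only): every byte up to the reported usable
   size may be written without disturbing other allocated objects. */
#if 0
{
  char *p = (char *) malloc(100);
  size_t n = malloc_usable_size(p);   /* n >= 100 */
  size_t i;
  for (i = 0; i < n; i++) p[i] = 0;   /* stays within the chunk */
}
#endif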
3285 size_t malloc_usable_size(Void_t* mem)
3287 size_t malloc_usable_size(mem) Void_t* mem;
3297 if(!chunk_is_mmapped(p))
3299 if (!inuse(p)) return 0;
3300 check_inuse_chunk(arena_for_ptr(mem), p);
3301 return chunksize(p) - SIZE_SZ;
3303 return chunksize(p) - 2*SIZE_SZ;
3310 /* Utility to update mallinfo for malloc_stats() and mallinfo() */
3314 malloc_update_mallinfo(arena *ar_ptr, struct mallinfo *mi)
3316 malloc_update_mallinfo(ar_ptr, mi) arena *ar_ptr; struct mallinfo *mi;
3325 INTERNAL_SIZE_T avail;
3327 (void)mutex_lock(&ar_ptr->mutex);
3328 avail = chunksize(top(ar_ptr));
3329 navail = ((long)(avail) >= (long)MINSIZE)? 1 : 0;
3331 for (i = 1; i < NAV; ++i)
3333 b = bin_at(ar_ptr, i);
3334 for (p = last(b); p != b; p = p->bk)
3337 check_free_chunk(ar_ptr, p);
3338 for (q = next_chunk(p);
3339 q != top(ar_ptr) && inuse(q) && (long)chunksize(q) > 0;
3341 check_inuse_chunk(ar_ptr, q);
3343 avail += chunksize(p);
3348 mi->arena = ar_ptr->size;
3349 mi->ordblks = navail;
3350 mi->uordblks = ar_ptr->size - avail;
3351 mi->fordblks = avail;
3352 mi->hblks = n_mmaps;
3353 mi->hblkhd = mmapped_mem;
3354 mi->keepcost = chunksize(top(ar_ptr));
3356 (void)mutex_unlock(&ar_ptr->mutex);
3359 #if !defined(NO_THREADS) && MALLOC_DEBUG > 1
3361 /* Print the complete contents of a single heap to stderr. */
3365 dump_heap(heap_info *heap)
3367 dump_heap(heap) heap_info *heap;
3373 fprintf(stderr, "Heap %p, size %10lx:\n", heap, (long)heap->size);
3374 ptr = (heap->ar_ptr != (arena*)(heap+1)) ?
3375 (char*)(heap + 1) : (char*)(heap + 1) + sizeof(arena);
3376 p = (mchunkptr)(((unsigned long)ptr + MALLOC_ALIGN_MASK) &
3377 ~MALLOC_ALIGN_MASK);
3379 fprintf(stderr, "chunk %p size %10lx", p, (long)p->size);
3380 if(p == top(heap->ar_ptr)) {
3381 fprintf(stderr, " (top)\n");
3383 } else if(p->size == (0|PREV_INUSE)) {
3384 fprintf(stderr, " (fence)\n");
3387 fprintf(stderr, "\n");
3400 For all arenas separately and in total, prints on stderr the
3401 amount of space obtained from the system, and the current number
3402 of bytes allocated via malloc (or realloc, etc) but not yet
3403 freed. (Note that this is the number of bytes allocated, not the
3404 number requested. It will be larger than the number requested
3405 because of alignment and bookkeeping overhead.) When not compiled
3406 for multiple threads, the maximum amount of allocated memory
3407 (which may be more than current if malloc_trim and/or munmap got
3408 called) is also reported. When using mmap(), prints the maximum
3409 number of simultaneous mmap regions used, too.
3418 unsigned int in_use_b = mmapped_mem, system_b = in_use_b;
3420 long stat_lock_direct = 0, stat_lock_loop = 0, stat_lock_wait = 0;
3423 for(i=0, ar_ptr = &main_arena; ar_ptr; ar_ptr = ar_ptr->next, i++) {
3424 malloc_update_mallinfo(ar_ptr, &mi);
3425 fprintf(stderr, "Arena %d:\n", i);
3426 fprintf(stderr, "system bytes = %10u\n", (unsigned int)mi.arena);
3427 fprintf(stderr, "in use bytes = %10u\n", (unsigned int)mi.uordblks);
3428 system_b += mi.arena;
3429 in_use_b += mi.uordblks;
3431 stat_lock_direct += ar_ptr->stat_lock_direct;
3432 stat_lock_loop += ar_ptr->stat_lock_loop;
3433 stat_lock_wait += ar_ptr->stat_lock_wait;
3435 #if !defined(NO_THREADS) && MALLOC_DEBUG > 1
3436 if(ar_ptr != &main_arena) {
3437 heap_info *heap = heap_for_ptr(top(ar_ptr));
3438 while(heap) { dump_heap(heap); heap = heap->prev; }
3442 fprintf(stderr, "Total (incl. mmap):\n");
3443 fprintf(stderr, "system bytes = %10u\n", system_b);
3444 fprintf(stderr, "in use bytes = %10u\n", in_use_b);
3446 fprintf(stderr, "max system bytes = %10u\n", (unsigned int)max_total_mem);
3449 fprintf(stderr, "max mmap regions = %10u\n", (unsigned int)max_n_mmaps);
3452 fprintf(stderr, "heaps created = %10d\n", stat_n_heaps);
3453 fprintf(stderr, "locked directly = %10ld\n", stat_lock_direct);
3454 fprintf(stderr, "locked in loop = %10ld\n", stat_lock_loop);
3455 fprintf(stderr, "locked waiting = %10ld\n", stat_lock_wait);
3456 fprintf(stderr, "locked total = %10ld\n",
3457 stat_lock_direct + stat_lock_loop + stat_lock_wait);
3462 mallinfo returns a copy of updated current mallinfo.
3463 The information reported is for the arena last used by the thread.
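/* Usage sketch (illustrative only); the fields mentioned are those filled
   in by malloc_update_mallinfo() above. */
#if 0
{
  struct mallinfo mi = mallinfo();
  /* mi.arena    - bytes obtained from the system for this arena */
  /* mi.uordblks - bytes currently allocated                     */
  /* mi.fordblks - bytes in free chunks                          */
  /* mi.keepcost - size of the top chunk, releasable by trimming */
}
#endif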
3466 struct mallinfo mALLINFo()
3469 Void_t *vptr = NULL;
3471 tsd_getspecific(arena_key, vptr);
3472 malloc_update_mallinfo((vptr ? (arena*)vptr : &main_arena), &mi);
3482 mallopt is the general SVID/XPG interface to tunable parameters.
3483 The format is to provide a (parameter-number, parameter-value) pair.
3484 mallopt then sets the corresponding parameter to the argument
3485 value if it can (i.e., so long as the value is meaningful),
3486 and returns 1 if successful else 0.
3488 See descriptions of tunable parameters above.
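/* Usage sketch (illustrative only), using two of the parameter numbers
   handled in the switch statement below: */
#if 0
{
  mallopt(M_TRIM_THRESHOLD, 256 * 1024);   /* returns 1 if the value is accepted */
  mallopt(M_MMAP_THRESHOLD, 512 * 1024);   /* rejected (0) if above HEAP_MAX_SIZE/2 */
}
#endif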
3493 int mALLOPt(int param_number, int value)
3495 int mALLOPt(param_number, value) int param_number; int value;
3498 switch(param_number)
3500 case M_TRIM_THRESHOLD:
3501 trim_threshold = value; return 1;
3503 top_pad = value; return 1;
3504 case M_MMAP_THRESHOLD:
3506 /* Forbid setting the threshold too high. */
3507 if((unsigned long)value > HEAP_MAX_SIZE/2) return 0;
3509 mmap_threshold = value; return 1;
3512 n_mmaps_max = value; return 1;
3514 if (value != 0) return 0; else n_mmaps_max = value; return 1;
3522 #if 0 && defined(_LIBC)
3523 weak_alias (__libc_calloc, calloc)
3524 weak_alias (__libc_free, cfree)
3525 weak_alias (__libc_free, free)
3526 weak_alias (__libc_malloc, malloc)
3527 weak_alias (__libc_memalign, memalign)
3528 weak_alias (__libc_realloc, realloc)
3529 weak_alias (__libc_valloc, valloc)
3530 weak_alias (__libc_pvalloc, pvalloc)
3531 weak_alias (__libc_mallinfo, mallinfo)
3532 weak_alias (__libc_mallopt, mallopt)
3539 V2.6.4-pt Wed Dec 4 1996 Wolfram Gloger (wmglo@dent.med.uni-muenchen.de)
3540 * Very minor updates from the released 2.6.4 version.
3541 * Trimmed include file down to exported data structures.
3542 * Changes from H.J. Lu for glibc-2.0.
3544 V2.6.3i-pt Sep 16 1996 Wolfram Gloger (wmglo@dent.med.uni-muenchen.de)
3545 * Many changes for multiple threads
3546 * Introduced arenas and heaps
3548 V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee)
3549 * Added pvalloc, as recommended by H.J. Lu
3550 * Added 64bit pointer support mainly from Wolfram Gloger
3551 * Added anonymously donated WIN32 sbrk emulation
3552 * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
3553 * malloc_extend_top: fix mask error that caused wastage after
3555 * Add linux mremap support code from H.J. Lu
3557 V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee)
3558 * Integrated most documentation with the code.
3559 * Add support for mmap, with help from
3560 Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
3561 * Use last_remainder in more cases.
3562 * Pack bins using idea from colin@nyx10.cs.du.edu
3563 * Use ordered bins instead of best-fit threshold
3564 * Eliminate block-local decls to simplify tracing and debugging.
3565 * Support another case of realloc via move into top
3566 * Fix error occurring when initial sbrk_base not word-aligned.
3567 * Rely on page size for units instead of SBRK_UNIT to
3568 avoid surprises about sbrk alignment conventions.
3569 * Add mallinfo, mallopt. Thanks to Raymond Nijssen
3570 (raymond@es.ele.tue.nl) for the suggestion.
3571 * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
3572 * More precautions for cases where other routines call sbrk,
3573 courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
3574 * Added macros etc., allowing use in linux libc from
3575 H.J. Lu (hjl@gnu.ai.mit.edu)
3576 * Inverted this history list
3578 V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee)
3579 * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
3580 * Removed all preallocation code since under current scheme
3581 the work required to undo bad preallocations exceeds
3582 the work saved in good cases for most test programs.
3583 * No longer use return list or unconsolidated bins since
3584 no scheme using them consistently outperforms those that don't
3585 given above changes.
3586 * Use best fit for very large chunks to prevent some worst-cases.
3587 * Added some support for debugging
3589 V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee)
3590 * Removed footers when chunks are in use. Thanks to
3591 Paul Wilson (wilson@cs.texas.edu) for the suggestion.
3593 V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee)
3594 * Added malloc_trim, with help from Wolfram Gloger
3595 (wmglo@Dent.MED.Uni-Muenchen.DE).
3597 V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g)
3599 V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g)
3600 * realloc: try to expand in both directions
3601 * malloc: swap order of clean-bin strategy;
3602 * realloc: only conditionally expand backwards
3603 * Try not to scavenge used bins
3604 * Use bin counts as a guide to preallocation
3605 * Occasionally bin return list chunks in first scan
3606 * Add a few optimizations from colin@nyx10.cs.du.edu
3608 V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g)
3609 * faster bin computation & slightly different binning
3610 * merged all consolidations to one part of malloc proper
3611 (eliminating old malloc_find_space & malloc_clean_bin)
3612 * Scan 2 returns chunks (not just 1)
3613 * Propagate failure in realloc if malloc returns 0
3614 * Add stuff to allow compilation on non-ANSI compilers
3615 from kpv@research.att.com
3617 V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu)
3618 * removed potential for odd address access in prev_chunk
3619 * removed dependency on getpagesize.h
3620 * misc cosmetics and a bit more internal documentation
3621 * anticosmetics: mangled names in macros to evade debugger strangeness
3622 * tested on sparc, hp-700, dec-mips, rs6000
3623 with gcc & native cc (hp, dec only) allowing
3624 Detlefs & Zorn comparison study (in SIGPLAN Notices.)
3626 Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu)
3627 * Based loosely on libg++-1.2X malloc. (It retains some of the overall
3628 structure of old version, but most details differ.)