1 /*
2
3 B G E T
4
5 Buffer allocator
6
7 Designed and implemented in April of 1972 by John Walker, based on the
8 Case Algol OPRO$ algorithm implemented in 1966.
9
10 Reimplemented in 1975 by John Walker for the Interdata 70.
11 Reimplemented in 1977 by John Walker for the Marinchip 9900.
12 Reimplemented in 1982 by Duff Kurland for the Intel 8080.
13
14 Portable C version implemented in September of 1990 by an older, wiser
15 instance of the original implementor.
16
17 Souped up and/or weighed down slightly shortly thereafter by Greg
18 Lutz.
19
20 AMIX edition, including the new compaction call-back option, prepared
21 by John Walker in July of 1992.
22
23 Bug in built-in test program fixed, ANSI compiler warnings eradicated,
24 buffer pool validator implemented, and guaranteed repeatable test
25 added by John Walker in October of 1995.
26
27 This program is in the public domain.
28
29 1. This is the book of the generations of Adam. In the day that God
30 created man, in the likeness of God made he him;
31 2. Male and female created he them; and blessed them, and called
32 their name Adam, in the day when they were created.
33 3. And Adam lived an hundred and thirty years, and begat a son in
34 his own likeness, and after his image; and called his name Seth:
35 4. And the days of Adam after he had begotten Seth were eight
36 hundred years: and he begat sons and daughters:
37 5. And all the days that Adam lived were nine hundred and thirty
38 years: and he died.
39 6. And Seth lived an hundred and five years, and begat Enos:
40 7. And Seth lived after he begat Enos eight hundred and seven years,
41 and begat sons and daughters:
42 8. And all the days of Seth were nine hundred and twelve years: and
43 he died.
44 9. And Enos lived ninety years, and begat Cainan:
45 10. And Enos lived after he begat Cainan eight hundred and fifteen
46 years, and begat sons and daughters:
47 11. And all the days of Enos were nine hundred and five years: and
48 he died.
49 12. And Cainan lived seventy years and begat Mahalaleel:
50 13. And Cainan lived after he begat Mahalaleel eight hundred and
51 forty years, and begat sons and daughters:
52 14. And all the days of Cainan were nine hundred and ten years: and
53 he died.
54 15. And Mahalaleel lived sixty and five years, and begat Jared:
55 16. And Mahalaleel lived after he begat Jared eight hundred and
56 thirty years, and begat sons and daughters:
57 17. And all the days of Mahalaleel were eight hundred ninety and
58 five years: and he died.
59 18. And Jared lived an hundred sixty and two years, and he begat
60 Enoch:
61 19. And Jared lived after he begat Enoch eight hundred years, and
62 begat sons and daughters:
63 20. And all the days of Jared were nine hundred sixty and two years:
64 and he died.
65 21. And Enoch lived sixty and five years, and begat Methuselah:
66 22. And Enoch walked with God after he begat Methuselah three
67 hundred years, and begat sons and daughters:
68 23. And all the days of Enoch were three hundred sixty and five
69 years:
70 24. And Enoch walked with God: and he was not; for God took him.
71 25. And Methuselah lived an hundred eighty and seven years, and
72 begat Lamech.
73 26. And Methuselah lived after he begat Lamech seven hundred eighty
74 and two years, and begat sons and daughters:
75 27. And all the days of Methuselah were nine hundred sixty and nine
76 years: and he died.
77 28. And Lamech lived an hundred eighty and two years, and begat a
78 son:
79 29. And he called his name Noah, saying, This same shall comfort us
80 concerning our work and toil of our hands, because of the ground
81 which the LORD hath cursed.
82 30. And Lamech lived after he begat Noah five hundred ninety and
83 five years, and begat sons and daughters:
84 31. And all the days of Lamech were seven hundred seventy and seven
85 years: and he died.
86 32. And Noah was five hundred years old: and Noah begat Shem, Ham,
87 and Japheth.
88
89 And buffers begat buffers, and links begat links, and buffer pools
90 begat links to chains of buffer pools containing buffers, and lo the
91 buffers and links and pools of buffers and pools of links to chains of
92 pools of buffers were fruitful and they multiplied and the Operating
93 System looked down upon them and said that it was Good.
94
95
96 INTRODUCTION
97 ============
98
99 BGET is a comprehensive memory allocation package which is easily
100 configured to the needs of an application. BGET is efficient in
101 both the time needed to allocate and release buffers and in the
102 memory overhead required for buffer pool management. It
103 automatically consolidates contiguous space to minimise
fragmentation. BGET is configured by compile-time definitions;
major options include:
106
107 * A built-in test program to exercise BGET and
108 demonstrate how the various functions are used.
109
110 * Allocation by either the "first fit" or "best fit"
111 method.
112
113 * Wiping buffers at release time to catch code which
114 references previously released storage.
115
116 * Built-in routines to dump individual buffers or the
117 entire buffer pool.
118
119 * Retrieval of allocation and pool size statistics.
120
121 * Quantisation of buffer sizes to a power of two to
122 satisfy hardware alignment constraints.
123
124 * Automatic pool compaction, growth, and shrinkage by
125 means of call-backs to user defined functions.
126
127 Applications of BGET can range from storage management in
128 ROM-based embedded programs to providing the framework upon which
129 a multitasking system incorporating garbage collection is
130 constructed. BGET incorporates extensive internal consistency
131 checking using the <assert.h> mechanism; all these checks can be
132 turned off by compiling with NDEBUG defined, yielding a version of
133 BGET with minimal size and maximum speed.
134
135 The basic algorithm underlying BGET has withstood the test of
136 time; more than 25 years have passed since the first
137 implementation of this code. And yet, it is substantially more
138 efficient than the native allocation schemes of many operating
139 systems: the Macintosh and Microsoft Windows to name two, on which
140 programs have obtained substantial speed-ups by layering BGET as
141 an application level memory manager atop the underlying system's.
142
143 BGET has been implemented on the largest mainframes and the lowest
144 of microprocessors. It has served as the core for multitasking
145 operating systems, multi-thread applications, embedded software in
146 data network switching processors, and a host of C programs. And
147 while it has accreted flexibility and additional options over the
148 years, it remains fast, memory efficient, portable, and easy to
149 integrate into your program.
150
151
152 BGET IMPLEMENTATION ASSUMPTIONS
153 ===============================
154
155 BGET is written in as portable a dialect of C as possible. The
156 only fundamental assumption about the underlying hardware
architecture is that memory is allocated as a linear array which
158 can be addressed as a vector of C "char" objects. On segmented
159 address space architectures, this generally means that BGET should
160 be used to allocate storage within a single segment (although some
161 compilers simulate linear address spaces on segmented
162 architectures). On segmented architectures, then, BGET buffer
163 pools may not be larger than a segment, but since BGET allows any
164 number of separate buffer pools, there is no limit on the total
165 storage which can be managed, only on the largest individual
166 object which can be allocated. Machines with a linear address
167 architecture, such as the VAX, 680x0, Sparc, MIPS, or the Intel
168 80386 and above in native mode, may use BGET without restriction.
169
170
171 GETTING STARTED WITH BGET
172 =========================
173
174 Although BGET can be configured in a multitude of fashions, there
175 are three basic ways of working with BGET. The functions
176 mentioned below are documented in the following section. Please
177 excuse the forward references which are made in the interest of
178 providing a roadmap to guide you to the BGET functions you're
179 likely to need.
180
181 Embedded Applications
182 ---------------------
183
184 Embedded applications typically have a fixed area of memory
185 dedicated to buffer allocation (often in a separate RAM address
186 space distinct from the ROM that contains the executable code).
187 To use BGET in such an environment, simply call bpool() with the
188 start address and length of the buffer pool area in RAM, then
189 allocate buffers with bget() and release them with brel().
190 Embedded applications with very limited RAM but abundant CPU speed
191 may benefit by configuring BGET for BestFit allocation (which is
192 usually not worth it in other environments).
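
    The fragment below is a minimal sketch of that pattern, written
    against the classic prototypes documented under "BGET FUNCTION
    DESCRIPTIONS" below (the implementation in this file extends them
    with alignment, header-size, and pool-set parameters); the pool
    size and the app_init()/app_work() names are purely illustrative.

        #include "bget.h"

        #define POOL_SIZE 16384               // illustrative RAM budget
        static char pool_area[POOL_SIZE];     // dedicated allocation arena

        void app_init(void)
        {
            bpool(pool_area, POOL_SIZE);      // hand the arena to BGET once
        }

        void app_work(void)
        {
            char *p = bget(128);              // allocate a 128-byte buffer
            if (p == NULL)
                return;                       // pool exhausted: handle locally
            // ... use p ...
            brel(p);                          // return the buffer to the pool
        }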
193
194 Malloc() Emulation
195 ------------------
196
197 If the C library malloc() function is too slow, not present in
your development environment (for example, in a native Windows or
199 Macintosh program), or otherwise unsuitable, you can replace it
200 with BGET. Initially define a buffer pool of an appropriate size
201 with bpool()--usually obtained by making a call to the operating
202 system's low-level memory allocator. Then allocate buffers with
203 bget(), bgetz(), and bgetr() (the last two permit the allocation
204 of buffers initialised to zero and [inefficient] re-allocation of
205 existing buffers for compatibility with C library functions).
206 Release buffers by calling brel(). If a buffer allocation request
207 fails, obtain more storage from the underlying operating system,
208 add it to the buffer pool by another call to bpool(), and continue
209 execution.
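
    A rough sketch of that recipe follows; my_malloc(), my_free(), and
    os_alloc() are illustrative names, not part of BGET, and the
    classic single-pool prototypes documented below are assumed.

        #include "bget.h"

        #define POOL_CHUNK ((bufsize) (64 * 1024))  // chunk size handed to bpool()

        extern void *os_alloc(bufsize size);        // hypothetical OS-level allocator

        void *my_malloc(bufsize size)
        {
            void *p = bget(size);
            if (p == NULL) {                        // pool exhausted: grow it, retry
                void *chunk = os_alloc(POOL_CHUNK);
                if (chunk == NULL)
                    return NULL;
                bpool(chunk, POOL_CHUNK);           // add new storage to the pool
                p = bget(size);
            }
            return p;
        }

        void my_free(void *p)
        {
            brel(p);
        }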
210
211 Automatic Storage Management
212 ----------------------------
213
214 You can use BGET as your application's native memory manager and
215 implement automatic storage pool expansion, contraction, and
216 optionally application-specific memory compaction by compiling
217 BGET with the BECtl variable defined, then calling bectl() and
218 supplying functions for storage compaction, acquisition, and
219 release, as well as a standard pool expansion increment. All of
220 these functions are optional (although it doesn't make much sense
221 to provide a release function without an acquisition function,
222 does it?). Once the call-back functions have been defined with
223 bectl(), you simply use bget() and brel() to allocate and release
224 storage as before. You can supply an initial buffer pool with
225 bpool() or rely on automatic allocation to acquire the entire
226 pool. When a call on bget() cannot be satisfied, BGET first
227 checks if a compaction function has been supplied. If so, it is
228 called (with the space required to satisfy the allocation request
229 and a sequence number to allow the compaction routine to be called
230 successively without looping). If the compaction function is able
231 to free any storage (it needn't know whether the storage it freed
232 was adequate) it should return a nonzero value, whereupon BGET
233 will retry the allocation request and, if it fails again, call the
234 compaction function again with the next-higher sequence number.
235
236 If the compaction function returns zero, indicating failure to
237 free space, or no compaction function is defined, BGET next tests
238 whether a non-NULL allocation function was supplied to bectl().
239 If so, that function is called with an argument indicating how
240 many bytes of additional space are required. This will be the
241 standard pool expansion increment supplied in the call to bectl()
242 unless the original bget() call requested a buffer larger than
243 this; buffers larger than the standard pool block can be managed
244 "off the books" by BGET in this mode. If the allocation function
245 succeeds in obtaining the storage, it returns a pointer to the new
block and BGET expands the buffer pool; if it fails, the
allocation request fails and bget() returns NULL to the caller. If a
248 non-NULL release function is supplied, expansion blocks which
249 become totally empty are released to the global free pool by
250 passing their addresses to the release function.
251
252 Equipped with appropriate allocation, release, and compaction
253 functions, BGET can be used as part of very sophisticated memory
254 management strategies, including garbage collection. (Note,
255 however, that BGET is *not* a garbage collector by itself, and
256 that developing such a system requires much additional logic and
257 careful design of the application's memory allocation strategy.)
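
    The sketch below shows one way to wire this up, assuming the classic
    four-argument bectl() documented below and the host malloc()/free()
    as the acquisition and release functions; storage_init() and the
    expansion increment are illustrative.

        #include <stdlib.h>
        #include "bget.h"

        #define EXPANSION_BLOCK ((bufsize) (32 * 1024))  // standard expansion increment

        static void *pool_acquire(bufsize size)
        {
            return malloc(size);            // obtain another expansion block from the OS
        }

        static void pool_release(void *buf)
        {
            free(buf);                      // hand an empty expansion block back
        }

        void storage_init(void)
        {
            // No compaction function in this sketch; pass NULL so a failed
            // bget() goes straight to pool_acquire() for more storage.
            bectl(NULL, pool_acquire, pool_release, EXPANSION_BLOCK);
            // No initial bpool() call is needed: the first bget() triggers
            // automatic acquisition of the first expansion block.
        }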
258
259
260 BGET FUNCTION DESCRIPTIONS
261 ==========================
262
263 Functions implemented in this file (some are enabled by certain of
264 the optional settings below):
265
266 void bpool(void *buffer, bufsize len);
267
268 Create a buffer pool of <len> bytes, using the storage starting at
269 <buffer>. You can call bpool() subsequently to contribute
270 additional storage to the overall buffer pool.
271
272 void *bget(bufsize size);
273
274 Allocate a buffer of <size> bytes. The address of the buffer is
275 returned, or NULL if insufficient memory was available to allocate
276 the buffer.
277
278 void *bgetz(bufsize size);
279
280 Allocate a buffer of <size> bytes and clear it to all zeroes. The
281 address of the buffer is returned, or NULL if insufficient memory
282 was available to allocate the buffer.
283
284 void *bgetr(void *buffer, bufsize newsize);
285
286 Reallocate a buffer previously allocated by bget(), changing its
287 size to <newsize> and preserving all existing data. NULL is
288 returned if insufficient memory is available to reallocate the
289 buffer, in which case the original buffer remains intact.
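
    Because the original buffer survives a failed reallocation, callers
    should not overwrite their only pointer with the return value. An
    illustrative fragment (buf and newsize come from the caller):

        void *nb = bgetr(buf, newsize);
        if (nb != NULL)
            buf = nb;        // adopt the resized buffer only on success
        // on failure buf still points at the original, intact buffer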
290
291 void brel(void *buf);
292
293 Return the buffer <buf>, previously allocated by bget(), to the
294 free space pool.
295
296 void bectl(int (*compact)(bufsize sizereq, int sequence),
297 void *(*acquire)(bufsize size),
298 void (*release)(void *buf),
299 bufsize pool_incr);
300
301 Expansion control: specify functions through which the package may
302 compact storage (or take other appropriate action) when an
303 allocation request fails, and optionally automatically acquire
304 storage for expansion blocks when necessary, and release such
305 blocks when they become empty. If <compact> is non-NULL, whenever
306 a buffer allocation request fails, the <compact> function will be
307 called with arguments specifying the number of bytes (total buffer
308 size, including header overhead) required to satisfy the
309 allocation request, and a sequence number indicating the number of
310 consecutive calls on <compact> attempting to satisfy this
311 allocation request. The sequence number is 1 for the first call
312 on <compact> for a given allocation request, and increments on
313 subsequent calls, permitting the <compact> function to take
314 increasingly dire measures in an attempt to free up storage. If
315 the <compact> function returns a nonzero value, the allocation
316 attempt is re-tried. If <compact> returns 0 (as it must if it
317 isn't able to release any space or add storage to the buffer
318 pool), the allocation request fails, which can trigger automatic
319 pool expansion if the <acquire> argument is non-NULL. At the time
320 the <compact> function is called, the state of the buffer
321 allocator is identical to that at the moment the allocation
322 request was made; consequently, the <compact> function may call
323 brel(), bpool(), bstats(), and/or directly manipulate the buffer
324 pool in any manner which would be valid were the application in
325 control. This does not, however, relieve the <compact> function
326 of the need to ensure that whatever actions it takes do not change
327 things underneath the application that made the allocation
328 request. For example, a <compact> function that released a buffer
329 in the process of being reallocated with bgetr() would lead to
330 disaster. Implementing a safe and effective <compact> mechanism
331 requires careful design of an application's memory architecture,
332 and cannot generally be easily retrofitted into existing code.
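
    A minimal sketch of a <compact> call-back using the sequence-number
    protocol is shown below; release_caches() and release_cold_buffers()
    are hypothetical application hooks that return nonzero when they
    managed to free something.

        static int my_compact(bufsize sizereq, int sequence)
        {
            (void) sizereq;                     // this sketch ignores the exact need
            if (sequence == 1)
                return release_caches();        // mild measure on the first attempt
            if (sequence == 2)
                return release_cold_buffers();  // harsher measure on the retry
            return 0;                   // nothing left to free: let the request fail
        }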
333
334 If <acquire> is non-NULL, that function will be called whenever an
335 allocation request fails. If the <acquire> function succeeds in
336 allocating the requested space and returns a pointer to the new
337 area, allocation will proceed using the expanded buffer pool. If
338 <acquire> cannot obtain the requested space, it should return NULL
339 and the entire allocation process will fail. <pool_incr>
340 specifies the normal expansion block size. Providing an <acquire>
341 function will cause subsequent bget() requests for buffers too
342 large to be managed in the linked-block scheme (in other words,
343 larger than <pool_incr> minus the buffer overhead) to be satisfied
344 directly by calls to the <acquire> function. Automatic release of
345 empty pool blocks will occur only if all pool blocks in the system
346 are the size given by <pool_incr>.
347
348 void bstats(bufsize *curalloc, bufsize *totfree,
349 bufsize *maxfree, long *nget, long *nrel);
350
351 The amount of space currently allocated is stored into the
352 variable pointed to by <curalloc>. The total free space (sum of
353 all free blocks in the pool) is stored into the variable pointed
354 to by <totfree>, and the size of the largest single block in the
355 free space pool is stored into the variable pointed to by
356 <maxfree>. The variables pointed to by <nget> and <nrel> are
357 filled, respectively, with the number of successful (non-NULL
358 return) bget() calls and the number of brel() calls.
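
    A typical call, using the five-pointer form documented above (the
    pool-set variant implemented in this file adds a final
    struct bpoolset * argument); the fragment assumes <stdio.h>:

        bufsize curalloc, totfree, maxfree;
        long nget, nrel;

        bstats(&curalloc, &totfree, &maxfree, &nget, &nrel);
        printf("in use %ld, free %ld, largest free block %ld (%ld gets, %ld rels)\n",
               (long) curalloc, (long) totfree, (long) maxfree, nget, nrel);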
359
360 void bstatse(bufsize *pool_incr, long *npool,
361 long *npget, long *nprel,
362 long *ndget, long *ndrel);
363
364 Extended statistics: The expansion block size will be stored into
365 the variable pointed to by <pool_incr>, or the negative thereof if
366 automatic expansion block releases are disabled. The number of
367 currently active pool blocks will be stored into the variable
368 pointed to by <npool>. The variables pointed to by <npget> and
369 <nprel> will be filled with, respectively, the number of expansion
370 block acquisitions and releases which have occurred. The
371 variables pointed to by <ndget> and <ndrel> will be filled with
372 the number of bget() and brel() calls, respectively, managed
373 through blocks directly allocated by the acquisition and release
374 functions.
375
376 void bufdump(void *buf);
377
378 The buffer pointed to by <buf> is dumped on standard output.
379
380 void bpoold(void *pool, int dumpalloc, int dumpfree);
381
382 All buffers in the buffer pool <pool>, previously initialised by a
383 call on bpool(), are listed in ascending memory address order. If
384 <dumpalloc> is nonzero, the contents of allocated buffers are
385 dumped; if <dumpfree> is nonzero, the contents of free blocks are
386 dumped.
387
388 int bpoolv(void *pool);
389
390 The named buffer pool, previously initialised by a call on
391 bpool(), is validated for bad pointers, overwritten data, etc. If
392 compiled with NDEBUG not defined, any error generates an assertion
393 failure. Otherwise 1 is returned if the pool is valid, 0 if an
394 error is found.
395
396
397 BGET CONFIGURATION
398 ==================
399 */
400
401 /*
402 * THIS SOFTWARE IS PROVIDED "AS IS" AND ANY EXPRESS OR IMPLIED
403 * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
404 * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
405 * IN NO EVENT SHALL ST BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
406 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
407 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
408 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
409 * ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
410 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
411 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
412 */
413
414 /* #define BGET_ENABLE_ALL_OPTIONS */
#ifdef BGET_ENABLE_ALL_OPTIONS
416 #define TestProg 20000 /* Generate built-in test program
417 if defined. The value specifies
418 how many buffer allocation attempts
419 the test program should make. */
420
421 #define SizeQuant 4 /* Buffer allocation size quantum:
422 all buffers allocated are a
423 multiple of this size. This
424 MUST be a power of two. */
425
426 #define BufDump 1 /* Define this symbol to enable the
427 bpoold() function which dumps the
428 buffers in a buffer pool. */
429
430 #define BufValid 1 /* Define this symbol to enable the
431 bpoolv() function for validating
432 a buffer pool. */
433
434 #define DumpData 1 /* Define this symbol to enable the
435 bufdump() function which allows
436 dumping the contents of an allocated
437 or free buffer. */
438
439 #define BufStats 1 /* Define this symbol to enable the
440 bstats() function which calculates
441 the total free space in the buffer
442 pool, the largest available
443 buffer, and the total space
444 currently allocated. */
445
446 #define FreeWipe 1 /* Wipe free buffers to a guaranteed
447 pattern of garbage to trip up
448 miscreants who attempt to use
449 pointers into released buffers. */
450
451 #define BestFit 1 /* Use a best fit algorithm when
452 searching for space for an
453 allocation request. This uses
454 memory more efficiently, but
455 allocation will be much slower. */
456
457 #define BECtl 1 /* Define this symbol to enable the
458 bectl() function for automatic
459 pool space control. */
460 #endif
461
462 #include <stdio.h>
463 #include <stdbool.h>
464
465 #ifdef lint
466 #define NDEBUG /* Exits in asserts confuse lint */
467 /* LINTLIBRARY */ /* Don't complain about def, no ref */
468 extern char *sprintf(); /* Sun includes don't define sprintf */
469 #endif
470
471 #include <assert.h>
472 #include <memory.h>
473
474 #ifdef BufDump /* BufDump implies DumpData */
475 #ifndef DumpData
476 #define DumpData 1
477 #endif
478 #endif
479
480 #ifdef DumpData
481 #include <ctype.h>
482 #endif
483
484 #ifdef __KERNEL__
485 #ifdef CFG_CORE_BGET_BESTFIT
486 #define BestFit 1
487 #endif
488 #endif
489
490 /* Declare the interface, including the requested buffer size type,
491 bufsize. */
492
493 #include "bget.h"
494
495 #define MemSize int /* Type for size arguments to memxxx()
496 functions such as memcmp(). */
497
498 /* Queue links */
499
500 struct qlinks {
501 struct bfhead *flink; /* Forward link */
502 struct bfhead *blink; /* Backward link */
503 };
504
505 /* Header in allocated and free buffers */
506
507 struct bhead {
508 bufsize prevfree; /* Relative link back to previous
509 free buffer in memory or 0 if
510 previous buffer is allocated. */
511 bufsize bsize; /* Buffer size: positive if free,
512 negative if allocated. */
513 };
514 #define BH(p) ((struct bhead *) (p))
515
516 /* Header in directly allocated buffers (by acqfcn) */
517
518 struct bdhead {
519 bufsize tsize; /* Total size, including overhead */
520 bufsize offs; /* Offset from allocated buffer */
521 struct bhead bh; /* Common header */
522 };
523 #define BDH(p) ((struct bdhead *) (p))
524
525 /* Header in free buffers */
526
527 struct bfhead {
528 struct bhead bh; /* Common allocated/free header */
529 struct qlinks ql; /* Links on free list */
530 };
531 #define BFH(p) ((struct bfhead *) (p))
532
533 /* Poolset definition */
534 struct bpoolset {
535 struct bfhead freelist;
536 #ifdef BufStats
537 bufsize totalloc; /* Total space currently allocated */
538 long numget; /* Number of bget() calls */
539 long numrel; /* Number of brel() calls */
540 #ifdef BECtl
541 long numpblk; /* Number of pool blocks */
542 long numpget; /* Number of block gets and rels */
543 long numprel;
544 long numdget; /* Number of direct gets and rels */
545 long numdrel;
546 #endif /* BECtl */
547 #endif /* BufStats */
548
549 #ifdef BECtl
550 /* Automatic expansion block management functions */
551
552 int (*compfcn) _((bufsize sizereq, int sequence));
553 void *(*acqfcn) _((bufsize size));
554 void (*relfcn) _((void *buf));
555
556 bufsize exp_incr; /* Expansion block size */
557 bufsize pool_len; /* 0: no bpool calls have been made
558 -1: not all pool blocks are
559 the same size
560 >0: (common) block size for all
561 bpool calls made so far
562 */
563 #endif
564 };
565
566 /* Minimum allocation quantum: */
567
568 #define QLSize (sizeof(struct qlinks))
569 #define SizeQ ((SizeQuant > QLSize) ? SizeQuant : QLSize)
570
571 #define V (void) /* To denote unwanted returned values */
572
573 /* End sentinel: value placed in bsize field of dummy block delimiting
574 end of pool block. The most negative number which will fit in a
575 bufsize, defined in a way that the compiler will accept. */
576
577 #define ESent ((bufsize) (-(((1L << (sizeof(bufsize) * 8 - 2)) - 1) * 2) - 2))
578
static bufsize buf_get_pos(struct bfhead *bf, bufsize align, bufsize hdr_size,
			   bufsize size)
581 {
582 unsigned long buf = 0;
583 bufsize pos = 0;
584
585 if (bf->bh.bsize < size)
586 return -1;
587
588 /*
589 * plus sizeof(struct bhead) and hdr_size since buf will follow just
* after a struct bhead and an optional extra header.
591 */
592 buf = (unsigned long)bf + bf->bh.bsize - size + sizeof(struct bhead) +
593 hdr_size;
594 buf &= ~(align - 1);
595 pos = buf - (unsigned long)bf - sizeof(struct bhead) - hdr_size;
596
597 if (pos == 0) /* exact match */
598 return pos;
599 if (pos >= SizeQ + sizeof(struct bhead)) /* room for an empty buffer */
600 return pos;
601
602 return -1;
603 }
604
605 /* BGET -- Allocate a buffer. */
606
void *bget(requested_align, hdr_size, requested_size, poolset)
608 bufsize requested_align;
609 bufsize hdr_size;
610 bufsize requested_size;
611 struct bpoolset *poolset;
612 {
613 bufsize align = requested_align;
614 bufsize size = requested_size;
615 bufsize pos;
616 struct bfhead *b;
617 #ifdef BestFit
618 struct bfhead *best;
619 #endif
620 void *buf;
621 #ifdef BECtl
622 int compactseq = 0;
623 #endif
624
625 assert(size > 0);
626 COMPILE_TIME_ASSERT(BGET_HDR_QUANTUM == SizeQ);
627
628 if (align < 0 || (align > 0 && !IS_POWER_OF_TWO((unsigned long)align)))
629 return NULL;
630 if (hdr_size % BGET_HDR_QUANTUM != 0)
631 return NULL;
632
633 if (size < SizeQ) { /* Need at least room for the */
634 size = SizeQ; /* queue links. */
635 }
636 if (align < SizeQ)
637 align = SizeQ;
638 #ifdef SizeQuant
639 #if SizeQuant > 1
640 if (ADD_OVERFLOW(size, SizeQuant - 1, &size))
641 return NULL;
642
643 size = ROUNDDOWN(size, SizeQuant);
644 #endif
645 #endif
646
647 /* Add overhead in allocated buffer to size required. */
648 if (ADD_OVERFLOW(size, sizeof(struct bhead), &size))
649 return NULL;
650 if (ADD_OVERFLOW(size, hdr_size, &size))
651 return NULL;
652
653 #ifdef BECtl
654 /* If a compact function was provided in the call to bectl(), wrap
655 a loop around the allocation process to allow compaction to
656 intervene in case we don't find a suitable buffer in the chain. */
657
658 while (1) {
659 #endif
660 b = poolset->freelist.ql.flink;
661 #ifdef BestFit
662 best = &poolset->freelist;
663 #endif
664
665
666 /* Scan the free list searching for the first buffer big enough
667 to hold the requested size buffer. */
668
669 #ifdef BestFit
670 while (b != &poolset->freelist) {
671 assert(b->bh.prevfree == 0);
672 pos = buf_get_pos(b, align, hdr_size, size);
673 if (pos >= 0) {
674 if ((best == &poolset->freelist) ||
675 (b->bh.bsize < best->bh.bsize)) {
676 best = b;
677 }
678 }
679 b = b->ql.flink; /* Link to next buffer */
680 }
681 b = best;
682 #endif /* BestFit */
683
684 while (b != &poolset->freelist) {
685 pos = buf_get_pos(b, align, hdr_size, size);
686 if (pos >= 0) {
687 struct bhead *b_alloc = BH((char *)b + pos);
688 struct bhead *b_next = BH((char *)b + b->bh.bsize);
689
690 assert(b_next->prevfree == b->bh.bsize);
691
692 /*
693 * Zero the back pointer in the next buffer in memory
694 * to indicate that this buffer is allocated.
695 */
696 b_next->prevfree = 0;
697
698 assert(b->ql.blink->ql.flink == b);
699 assert(b->ql.flink->ql.blink == b);
700
701 if (pos == 0) {
702 /*
703 * Need to allocate from the beginning of this free block.
704 * Unlink the block and mark it as allocated.
705 */
706 b->ql.blink->ql.flink = b->ql.flink;
707 b->ql.flink->ql.blink = b->ql.blink;
708
709 /* Negate size to mark buffer allocated. */
710 b->bh.bsize = -b->bh.bsize;
711 } else {
712 /*
713 * Carve out the memory allocation from the end of this
714 * free block. Negative size to mark buffer allocated.
715 */
716 b_alloc->bsize = -(b->bh.bsize - pos);
717 b_alloc->prevfree = pos;
718 b->bh.bsize = pos;
719 }
720
721 assert(b_alloc->bsize < 0);
722 /*
* At this point b_alloc points to the allocated buffer
* and b_next to the buffer following it. b might be a
* free block or a used block now.
726 */
727 if (-b_alloc->bsize - size > SizeQ + sizeof(struct bhead)) {
728 /*
* b_alloc has too much unused memory at the end, so
* we need to split the block and register the last
* part as free.
732 */
733 b = BFH((char *)b_alloc + size);
734 b->bh.bsize = -b_alloc->bsize - size;
735 b->bh.prevfree = 0;
736 b_alloc->bsize += b->bh.bsize;
737
738 assert(poolset->freelist.ql.blink->ql.flink ==
739 &poolset->freelist);
740 assert(poolset->freelist.ql.flink->ql.blink ==
741 &poolset->freelist);
742 b->ql.flink = &poolset->freelist;
743 b->ql.blink = poolset->freelist.ql.blink;
744 poolset->freelist.ql.blink = b;
745 b->ql.blink->ql.flink = b;
746
747 assert(BH((char *)b + b->bh.bsize) == b_next);
748 b_next->prevfree = b->bh.bsize;
749 }
750
751 #ifdef BufStats
752 poolset->totalloc -= b_alloc->bsize;
753 poolset->numget++; /* Increment number of bget() calls */
754 #endif
755 buf = (char *)b_alloc + sizeof(struct bhead);
756 return buf;
757 }
758 b = b->ql.flink; /* Link to next buffer */
759 }
760 #ifdef BECtl
761
762 /* We failed to find a buffer. If there's a compact function
763 defined, notify it of the size requested. If it returns
764 TRUE, try the allocation again. */
765
766 if ((poolset->compfcn == NULL) ||
767 (!(poolset->compfcn)(size, ++compactseq))) {
768 break;
769 }
770 }
771
772 /* No buffer available with requested size free. */
773
774 /* Don't give up yet -- look in the reserve supply. */
775
776 if (poolset->acqfcn != NULL) {
if (size > poolset->exp_incr - sizeof(struct bfhead) - align) {
778
779 /* Request is too large to fit in a single expansion
block. Try to satisfy it by a direct buffer acquisition. */
781 char *p;
782
783 size += sizeof(struct bdhead) - sizeof(struct bhead);
784 if (align > QLSize)
785 size += align;
786 p = poolset->acqfcn(size);
787 if (p != NULL) {
788 struct bdhead *bdh;
789
790 if (align <= QLSize) {
791 bdh = BDH(p);
792 buf = bdh + 1;
793 } else {
794 unsigned long tp = (unsigned long)p;
795
796 tp += sizeof(*bdh) + hdr_size + align;
797 tp &= ~(align - 1);
798 tp -= hdr_size;
799 buf = (void *)tp;
800 bdh = BDH((char *)buf - sizeof(*bdh));
801 }
802
803 /* Mark the buffer special by setting the size field
804 of its header to zero. */
805 bdh->bh.bsize = 0;
806 bdh->bh.prevfree = 0;
807 bdh->tsize = size;
808 bdh->offs = (unsigned long)bdh - (unsigned long)p;
809 #ifdef BufStats
810 poolset->totalloc += size;
811 poolset->numget++; /* Increment number of bget() calls */
812 poolset->numdget++; /* Direct bget() call count */
813 #endif
814 return buf;
815 }
816
817 } else {
818
819 /* Try to obtain a new expansion block */
820
821 void *newpool;
822
if ((newpool = poolset->acqfcn((bufsize) poolset->exp_incr)) != NULL) {
bpool(newpool, poolset->exp_incr, poolset);
buf = bget(align, hdr_size, requested_size, poolset); /* This can't, I say, can't
826 get into a loop. */
827 return buf;
828 }
829 }
830 }
831
832 /* Still no buffer available */
833
834 #endif /* BECtl */
835
836 return NULL;
837 }
838
839 /* BGETZ -- Allocate a buffer and clear its contents to zero. We clear
840 the entire contents of the buffer to zero, not just the
841 region requested by the caller. */
842
void *bgetz(align, hdr_size, size, poolset)
844 bufsize align;
845 bufsize hdr_size;
846 bufsize size;
847 struct bpoolset *poolset;
848 {
849 char *buf = (char *) bget(align, hdr_size, size, poolset);
850
851 if (buf != NULL) {
852 struct bhead *b;
853 bufsize rsize;
854
855 b = BH(buf - sizeof(struct bhead));
856 rsize = -(b->bsize);
857 if (rsize == 0) {
858 struct bdhead *bd;
859
860 bd = BDH(buf - sizeof(struct bdhead));
861 rsize = bd->tsize - sizeof(struct bdhead) - bd->offs;
862 } else {
863 rsize -= sizeof(struct bhead);
864 }
865 assert(rsize >= size);
866 V memset_unchecked(buf, 0, (MemSize) rsize);
867 }
868 return ((void *) buf);
869 }
870
871 /* BGETR -- Reallocate a buffer. This is a minimal implementation,
872 simply in terms of brel() and bget(). It could be
873 enhanced to allow the buffer to grow into adjacent free
874 blocks and to avoid moving data unnecessarily. */
875
void *bgetr(buf, align, hdr_size, size, poolset)
877 void *buf;
878 bufsize align;
879 bufsize hdr_size;
880 bufsize size;
881 struct bpoolset *poolset;
882 {
883 void *nbuf;
884 bufsize osize; /* Old size of buffer */
885 struct bhead *b;
886
887 if ((nbuf = bget(align, hdr_size, size, poolset)) == NULL) { /* Acquire new buffer */
888 return NULL;
889 }
890 if (buf == NULL) {
891 return nbuf;
892 }
893 b = BH(((char *) buf) - sizeof(struct bhead));
894 osize = -b->bsize;
895 #ifdef BECtl
896 if (osize == 0) {
897 /* Buffer acquired directly through acqfcn. */
898 struct bdhead *bd;
899
900 bd = BDH(((char *) buf) - sizeof(struct bdhead));
901 osize = bd->tsize - sizeof(struct bdhead) - bd->offs;
902 } else
903 #endif
904 osize -= sizeof(struct bhead);
905 assert(osize > 0);
906 V memcpy_unchecked((char *) nbuf, (char *) buf, /* Copy the data */
907 (MemSize) ((size < osize) ? size : osize));
908 #ifndef __KERNEL__
909 /* User space reallocations are always zeroed */
910 if (size > osize)
911 V memset_unchecked((char *) nbuf + osize, 0, size - osize);
912 #endif
913 brel(buf, poolset, false /* !wipe */);
914 return nbuf;
915 }
916
917 /* BREL -- Release a buffer. */
918
void brel(buf, poolset, wipe)
920 void *buf;
921 struct bpoolset *poolset;
922 int wipe;
923 {
924 struct bfhead *b, *bn;
925 char *wipe_start;
926 bufsize wipe_size;
927
928 b = BFH(((char *) buf) - sizeof(struct bhead));
929 #ifdef BufStats
930 poolset->numrel++; /* Increment number of brel() calls */
931 #endif
932 assert(buf != NULL);
933
934 #ifdef FreeWipe
935 wipe = true;
936 #endif
937 #ifdef BECtl
938 if (b->bh.bsize == 0) { /* Directly-acquired buffer? */
939 struct bdhead *bdh;
940
941 bdh = BDH(((char *) buf) - sizeof(struct bdhead));
942 assert(b->bh.prevfree == 0);
943 #ifdef BufStats
944 poolset->totalloc -= bdh->tsize;
945 assert(poolset->totalloc >= 0);
946 poolset->numdrel++; /* Number of direct releases */
947 #endif /* BufStats */
948 if (wipe) {
949 V memset_unchecked((char *) buf, 0x55,
950 (MemSize) (bdh->tsize -
951 sizeof(struct bdhead)));
952 }
953 assert(poolset->relfcn != NULL);
954 poolset->relfcn((char *)buf - sizeof(struct bdhead) - bdh->offs); /* Release it directly. */
955 return;
956 }
957 #endif /* BECtl */
958
959 /* Buffer size must be negative, indicating that the buffer is
960 allocated. */
961
962 if (b->bh.bsize >= 0) {
963 bn = NULL;
964 }
965 assert(b->bh.bsize < 0);
966
967 /* Back pointer in next buffer must be zero, indicating the
968 same thing: */
969
970 assert(BH((char *) b - b->bh.bsize)->prevfree == 0);
971
972 #ifdef BufStats
973 poolset->totalloc += b->bh.bsize;
974 assert(poolset->totalloc >= 0);
975 #endif
976
977 /* If the back link is nonzero, the previous buffer is free. */
978
979 if (b->bh.prevfree != 0) {
980
981 /* The previous buffer is free. Consolidate this buffer with it
982 by adding the length of this buffer to the previous free
983 buffer. Note that we subtract the size in the buffer being
984 released, since it's negative to indicate that the buffer is
985 allocated. */
986
987 register bufsize size = b->bh.bsize;
988
989 /* Only wipe the current buffer, including bfhead. */
990 wipe_start = (char *)b;
991 wipe_size = -size;
992
993 /* Make the previous buffer the one we're working on. */
994 assert(BH((char *) b - b->bh.prevfree)->bsize == b->bh.prevfree);
995 b = BFH(((char *) b) - b->bh.prevfree);
996 b->bh.bsize -= size;
997 } else {
998
/* The previous buffer is allocated. Insert this buffer
   on the free list as an isolated free block. */
1001
1002 assert(poolset->freelist.ql.blink->ql.flink == &poolset->freelist);
1003 assert(poolset->freelist.ql.flink->ql.blink == &poolset->freelist);
1004 b->ql.flink = &poolset->freelist;
1005 b->ql.blink = poolset->freelist.ql.blink;
1006 poolset->freelist.ql.blink = b;
1007 b->ql.blink->ql.flink = b;
1008 b->bh.bsize = -b->bh.bsize;
1009
1010 wipe_start = (char *)b + sizeof(struct bfhead);
1011 wipe_size = b->bh.bsize - sizeof(struct bfhead);
1012 }
1013
1014 /* Now we look at the next buffer in memory, located by advancing from
1015 the start of this buffer by its size, to see if that buffer is
1016 free. If it is, we combine this buffer with the next one in
1017 memory, dechaining the second buffer from the free list. */
1018
1019 bn = BFH(((char *) b) + b->bh.bsize);
1020 if (bn->bh.bsize > 0) {
1021
1022 /* The buffer is free. Remove it from the free list and add
1023 its size to that of our buffer. */
1024
1025 assert(BH((char *) bn + bn->bh.bsize)->prevfree == bn->bh.bsize);
1026 assert(bn->ql.blink->ql.flink == bn);
1027 assert(bn->ql.flink->ql.blink == bn);
1028 bn->ql.blink->ql.flink = bn->ql.flink;
1029 bn->ql.flink->ql.blink = bn->ql.blink;
1030 b->bh.bsize += bn->bh.bsize;
1031
1032 /* Finally, advance to the buffer that follows the newly
1033 consolidated free block. We must set its backpointer to the
1034 head of the consolidated free block. We know the next block
1035 must be an allocated block because the process of recombination
1036 guarantees that two free blocks will never be contiguous in
1037 memory. */
1038
1039 bn = BFH(((char *) b) + b->bh.bsize);
1040 /* Only bfhead of next buffer needs to be wiped */
1041 wipe_size += sizeof(struct bfhead);
1042 }
1043 if (wipe) {
1044 V memset_unchecked(wipe_start, 0x55, wipe_size);
1045 }
1046 assert(bn->bh.bsize < 0);
1047
1048 /* The next buffer is allocated. Set the backpointer in it to point
1049 to this buffer; the previous free buffer in memory. */
1050
1051 bn->bh.prevfree = b->bh.bsize;
1052
1053 #ifdef BECtl
1054
1055 /* If a block-release function is defined, and this free buffer
1056 constitutes the entire block, release it. Note that pool_len
1057 is defined in such a way that the test will fail unless all
1058 pool blocks are the same size. */
1059
1060 if (poolset->relfcn != NULL &&
((bufsize) b->bh.bsize) == (poolset->pool_len - sizeof(struct bhead))) {
1062
1063 assert(b->bh.prevfree == 0);
1064 assert(BH((char *) b + b->bh.bsize)->bsize == ESent);
1065 assert(BH((char *) b + b->bh.bsize)->prevfree == b->bh.bsize);
1066 /* Unlink the buffer from the free list */
1067 b->ql.blink->ql.flink = b->ql.flink;
1068 b->ql.flink->ql.blink = b->ql.blink;
1069
1070 poolset->relfcn(b);
1071 #ifdef BufStats
1072 poolset->numprel++; /* Nr of expansion block releases */
1073 poolset->numpblk--; /* Total number of blocks */
assert(poolset->numpblk == poolset->numpget - poolset->numprel);
1075 #endif /* BufStats */
1076 }
1077 #endif /* BECtl */
1078 }
1079
1080 #ifdef BECtl
1081
1082 /* BECTL -- Establish automatic pool expansion control */
1083
1084 void bectl(compact, acquire, release, pool_incr, poolset)
1085 int (*compact) _((bufsize sizereq, int sequence));
1086 void *(*acquire) _((bufsize size));
1087 void (*release) _((void *buf));
1088 bufsize pool_incr;
1089 struct bpoolset *poolset;
1090 {
1091 poolset->compfcn = compact;
1092 poolset->acqfcn = acquire;
1093 poolset->relfcn = release;
1094 poolset->exp_incr = pool_incr;
1095 }
1096 #endif
1097
1098 /* BPOOL -- Add a region of memory to the buffer pool. */
1099
void bpool(buf, len, poolset)
1101 void *buf;
1102 bufsize len;
1103 struct bpoolset *poolset;
1104 {
1105 struct bfhead *b = BFH(buf);
1106 struct bhead *bn;
1107
1108 #ifdef SizeQuant
1109 len &= ~(SizeQuant - 1);
1110 #endif
1111 #ifdef BECtl
1112 if (poolset->pool_len == 0) {
poolset->pool_len = len;
1114 } else if (len != poolset->pool_len) {
1115 poolset->pool_len = -1;
1116 }
1117 #ifdef BufStats
1118 poolset->numpget++; /* Number of block acquisitions */
1119 poolset->numpblk++; /* Number of blocks total */
1120 assert(poolset->numpblk == poolset->numpget - poolset->numprel);
1121 #endif /* BufStats */
1122 #endif /* BECtl */
1123
1124 /* Since the block is initially occupied by a single free buffer,
1125 it had better not be (much) larger than the largest buffer
1126 whose size we can store in bhead.bsize. */
1127
1128 assert(len - sizeof(struct bhead) <= -((bufsize) ESent + 1));
1129
1130 /* Clear the backpointer at the start of the block to indicate that
1131 there is no free block prior to this one. That blocks
1132 recombination when the first block in memory is released. */
1133
1134 b->bh.prevfree = 0;
1135
1136 /* Chain the new block to the free list. */
1137
1138 assert(poolset->freelist.ql.blink->ql.flink == &poolset->freelist);
1139 assert(poolset->freelist.ql.flink->ql.blink == &poolset->freelist);
1140 b->ql.flink = &poolset->freelist;
1141 b->ql.blink = poolset->freelist.ql.blink;
1142 poolset->freelist.ql.blink = b;
1143 b->ql.blink->ql.flink = b;
1144
1145 /* Create a dummy allocated buffer at the end of the pool. This dummy
1146 buffer is seen when a buffer at the end of the pool is released and
1147 blocks recombination of the last buffer with the dummy buffer at
1148 the end. The length in the dummy buffer is set to the largest
1149 negative number to denote the end of the pool for diagnostic
1150 routines (this specific value is not counted on by the actual
1151 allocation and release functions). */
1152
1153 len -= sizeof(struct bhead);
1154 b->bh.bsize = (bufsize) len;
1155 #ifdef FreeWipe
1156 V memset_unchecked(((char *) b) + sizeof(struct bfhead), 0x55,
1157 (MemSize) (len - sizeof(struct bfhead)));
1158 #endif
1159 bn = BH(((char *) b) + len);
1160 bn->prevfree = (bufsize) len;
1161 /* Definition of ESent assumes two's complement! */
1162 assert((~0) == -1);
1163 bn->bsize = ESent;
1164 }
1165
1166 #ifdef BufStats
1167
1168 /* BSTATS -- Return buffer allocation free space statistics. */
1169
void bstats(curalloc, totfree, maxfree, nget, nrel, poolset)
1171 bufsize *curalloc, *totfree, *maxfree;
1172 long *nget, *nrel;
1173 struct bpoolset *poolset;
1174 {
1175 struct bfhead *b = poolset->freelist.ql.flink;
1176
1177 *nget = poolset->numget;
1178 *nrel = poolset->numrel;
1179 *curalloc = poolset->totalloc;
1180 *totfree = 0;
1181 *maxfree = -1;
1182 while (b != &poolset->freelist) {
1183 assert(b->bh.bsize > 0);
1184 *totfree += b->bh.bsize;
1185 if (b->bh.bsize > *maxfree) {
1186 *maxfree = b->bh.bsize;
1187 }
1188 b = b->ql.flink; /* Link to next buffer */
1189 }
1190 }
1191
1192 #ifdef BECtl
1193
1194 /* BSTATSE -- Return extended statistics */
1195
void bstatse(pool_incr, npool, npget, nprel, ndget, ndrel, poolset)
1197 bufsize *pool_incr;
1198 long *npool, *npget, *nprel, *ndget, *ndrel;
1199 struct bpoolset *poolset;
1200 {
1201 *pool_incr = (poolset->pool_len < 0) ?
1202 -poolset->exp_incr : poolset->exp_incr;
1203 *npool = poolset->numpblk;
1204 *npget = poolset->numpget;
1205 *nprel = poolset->numprel;
1206 *ndget = poolset->numdget;
1207 *ndrel = poolset->numdrel;
1208 }
1209 #endif /* BECtl */
1210 #endif /* BufStats */
1211
1212 #ifdef DumpData
1213
1214 /* BUFDUMP -- Dump the data in a buffer. This is called with the user
1215 data pointer, and backs up to the buffer header. It will
1216 dump either a free block or an allocated one. */
1217
void bufdump(buf)
1219 void *buf;
1220 {
1221 struct bfhead *b;
1222 unsigned char *bdump;
1223 bufsize bdlen;
1224
1225 b = BFH(((char *) buf) - sizeof(struct bhead));
1226 assert(b->bh.bsize != 0);
1227 if (b->bh.bsize < 0) {
1228 bdump = (unsigned char *) buf;
1229 bdlen = (-b->bh.bsize) - sizeof(struct bhead);
1230 } else {
1231 bdump = (unsigned char *) (((char *) b) + sizeof(struct bfhead));
1232 bdlen = b->bh.bsize - sizeof(struct bfhead);
1233 }
1234
1235 while (bdlen > 0) {
1236 int i, dupes = 0;
1237 bufsize l = bdlen;
1238 char bhex[50], bascii[20];
1239
1240 if (l > 16) {
1241 l = 16;
1242 }
1243
1244 for (i = 0; i < l; i++) {
1245 V snprintf(bhex + i * 3, sizeof(bhex) - i * 3, "%02X ",
1246 bdump[i]);
1247 bascii[i] = isprint(bdump[i]) ? bdump[i] : ' ';
1248 }
1249 bascii[i] = 0;
1250 V printf("%-48s %s\n", bhex, bascii);
1251 bdump += l;
1252 bdlen -= l;
1253 while ((bdlen > 16) && (memcmp((char *) (bdump - 16),
1254 (char *) bdump, 16) == 0)) {
1255 dupes++;
1256 bdump += 16;
1257 bdlen -= 16;
1258 }
1259 if (dupes > 1) {
1260 V printf(
1261 " (%d lines [%d bytes] identical to above line skipped)\n",
1262 dupes, dupes * 16);
1263 } else if (dupes == 1) {
1264 bdump -= 16;
1265 bdlen += 16;
1266 }
1267 }
1268 }
1269 #endif
1270
1271 #ifdef BufDump
1272
1273 /* BPOOLD -- Dump a buffer pool. The buffer headers are always listed.
1274 If DUMPALLOC is nonzero, the contents of allocated buffers
1275 are dumped. If DUMPFREE is nonzero, free blocks are
1276 dumped as well. If FreeWipe checking is enabled, free
1277 blocks which have been clobbered will always be dumped. */
1278
void bpoold(buf, dumpalloc, dumpfree)
1280 void *buf;
1281 int dumpalloc, dumpfree;
1282 {
1283 struct bfhead *b = BFH(buf);
1284
1285 while (b->bh.bsize != ESent) {
1286 bufsize bs = b->bh.bsize;
1287
1288 if (bs < 0) {
1289 bs = -bs;
1290 V printf("Allocated buffer: size %6ld bytes.\n", (long) bs);
1291 if (dumpalloc) {
1292 bufdump((void *) (((char *) b) + sizeof(struct bhead)));
1293 }
1294 } else {
1295 char *lerr = "";
1296
1297 assert(bs > 0);
1298 if ((b->ql.blink->ql.flink != b) ||
1299 (b->ql.flink->ql.blink != b)) {
1300 lerr = " (Bad free list links)";
1301 }
1302 V printf("Free block: size %6ld bytes.%s\n",
1303 (long) bs, lerr);
1304 #ifdef FreeWipe
1305 lerr = ((char *) b) + sizeof(struct bfhead);
1306 if ((bs > sizeof(struct bfhead)) && ((*lerr != 0x55) ||
1307 (memcmp(lerr, lerr + 1,
1308 (MemSize) (bs - (sizeof(struct bfhead) + 1))) != 0))) {
1309 V printf(
1310 "(Contents of above free block have been overstored.)\n");
1311 bufdump((void *) (((char *) b) + sizeof(struct bhead)));
1312 } else
1313 #endif
1314 if (dumpfree) {
1315 bufdump((void *) (((char *) b) + sizeof(struct bhead)));
1316 }
1317 }
1318 b = BFH(((char *) b) + bs);
1319 }
1320 }
1321 #endif /* BufDump */
1322
1323 #ifdef BufValid
1324
1325 /* BPOOLV -- Validate a buffer pool. If NDEBUG isn't defined,
1326 any error generates an assertion failure. */
1327
int bpoolv(buf)
1329 void *buf;
1330 {
1331 struct bfhead *b = BFH(buf);
1332
1333 while (b->bh.bsize != ESent) {
1334 bufsize bs = b->bh.bsize;
1335
1336 if (bs < 0) {
1337 bs = -bs;
1338 } else {
1339 const char *lerr = "";
1340
1341 assert(bs > 0);
1342 if (bs <= 0) {
1343 return 0;
1344 }
1345 if ((b->ql.blink->ql.flink != b) ||
1346 (b->ql.flink->ql.blink != b)) {
1347 V printf("Free block: size %6ld bytes. (Bad free list links)\n",
1348 (long) bs);
1349 assert(0);
1350 return 0;
1351 }
1352 #ifdef FreeWipe
1353 lerr = ((char *) b) + sizeof(struct bfhead);
1354 if ((bs > sizeof(struct bfhead)) && ((*lerr != 0x55) ||
1355 (memcmp(lerr, lerr + 1,
1356 (MemSize) (bs - (sizeof(struct bfhead) + 1))) != 0))) {
1357 V printf(
1358 "(Contents of above free block have been overstored.)\n");
1359 bufdump((void *) (((char *) b) + sizeof(struct bhead)));
1360 assert(0);
1361 return 0;
1362 }
1363 #endif
1364 }
1365 b = BFH(((char *) b) + bs);
1366 }
1367 return 1;
1368 }
1369 #endif /* BufValid */
1370
/***********************\
*                       *
* Built-in test program *
*                       *
\***********************/
1376
1377 #if !defined(__KERNEL__) && !defined(__LDELF__) && defined(CFG_TA_BGET_TEST)
1378
1379 #define TestProg 20000
1380
1381 #ifdef BECtl
1382 #define PoolSize 300000 /* Test buffer pool size */
1383 #else
1384 #define PoolSize 50000 /* Test buffer pool size */
1385 #endif
1386 #define ExpIncr 32768 /* Test expansion block size */
1387 #define CompactTries 10 /* Maximum tries at compacting */
1388
1389 #define dumpAlloc 0 /* Dump allocated buffers ? */
1390 #define dumpFree 0 /* Dump free buffers ? */
1391
1392 static char *bchain = NULL; /* Our private buffer chain */
1393 static char *bp = NULL; /* Our initial buffer pool */
1394
1395 #ifdef UsingFloat
1396 #include <math.h>
1397 #endif
1398
1399 static unsigned long int next = 1;
1400
1401 static void *(*mymalloc)(size_t size);
1402 static void (*myfree)(void *ptr);
1403
1404 static struct bpoolset mypoolset = {
1405 .freelist = {
1406 .bh = { 0, 0},
1407 .ql = { &mypoolset.freelist, &mypoolset.freelist},
1408 }
1409 };
1410
1411 /* Return next random integer */
1412
static int myrand(void)
1414 {
1415 next = next * 1103515245L + 12345;
1416 return (unsigned int) (next / 65536L) % 32768L;
1417 }
1418
1419 /* Set seed for random generator */
1420
static void mysrand(unsigned int seed)
1422 {
1423 next = seed;
1424 }
1425
1426 /* STATS -- Edit statistics returned by bstats() or bstatse(). */
1427
static void stats(const char *when __maybe_unused,
		  struct bpoolset *poolset __maybe_unused)
1430 {
1431 #ifdef BufStats
1432 bufsize cural, totfree, maxfree;
1433 long nget, nfree;
1434 #endif
1435 #ifdef BECtl
1436 bufsize pincr;
1437 long totblocks, npget, nprel, ndget, ndrel;
1438 #endif
1439
1440 #ifdef BufStats
1441 bstats(&cural, &totfree, &maxfree, &nget, &nfree, poolset);
1442 V printf(
1443 "%s: %ld gets, %ld releases. %ld in use, %ld free, largest = %ld\n",
1444 when, nget, nfree, (long) cural, (long) totfree, (long) maxfree);
1445 #endif
1446 #ifdef BECtl
1447 bstatse(&pincr, &totblocks, &npget, &nprel, &ndget, &ndrel, poolset);
1448 V printf(
1449 " Blocks: size = %ld, %ld (%ld bytes) in use, %ld gets, %ld frees\n",
1450 (long)pincr, totblocks, pincr * totblocks, npget, nprel);
1451 V printf(" %ld direct gets, %ld direct frees\n", ndget, ndrel);
1452 #endif /* BECtl */
1453 }
1454
1455 #ifdef BECtl
1456 static int protect = 0; /* Disable compaction during bgetr() */
1457
1458 /* BCOMPACT -- Compaction call-back function. */
1459
static int bcompact(bsize, seq)
1461 bufsize bsize;
1462 int seq;
1463 {
1464 #ifdef CompactTries
1465 char *bc = bchain;
1466 int i = myrand() & 0x3;
1467
1468 #ifdef COMPACTRACE
1469 V printf("Compaction requested. %ld bytes needed, sequence %d.\n",
1470 (long) bsize, seq);
1471 #endif
1472
1473 if (protect || (seq > CompactTries)) {
1474 #ifdef COMPACTRACE
1475 V printf("Compaction gave up.\n");
1476 #endif
1477 return 0;
1478 }
1479
1480 /* Based on a random cast, release a random buffer in the list
1481 of allocated buffers. */
1482
1483 while (i > 0 && bc != NULL) {
1484 bc = *((char **) bc);
1485 i--;
1486 }
1487 if (bc != NULL) {
1488 char *fb;
1489
1490 fb = *((char **) bc);
1491 if (fb != NULL) {
1492 *((char **) bc) = *((char **) fb);
brel((void *) fb, &mypoolset, true/*wipe*/);
1494 return 1;
1495 }
1496 }
1497
1498 #ifdef COMPACTRACE
1499 V printf("Compaction bailed out.\n");
1500 #endif
1501 #endif /* CompactTries */
1502 return 0;
1503 }
1504
1505 /* BEXPAND -- Expand pool call-back function. */
1506
static void *bexpand(size)
1508 bufsize size;
1509 {
1510 void *np = NULL;
1511 bufsize cural, totfree, maxfree;
1512 long nget, nfree;
1513
1514 /* Don't expand beyond the total allocated size given by PoolSize. */
1515
bstats(&cural, &totfree, &maxfree, &nget, &nfree, &mypoolset);
1517
1518 if (cural < PoolSize) {
1519 np = (void *) mymalloc((unsigned) size);
1520 }
1521 #ifdef EXPTRACE
1522 V printf("Expand pool by %ld -- %s.\n", (long) size,
1523 np == NULL ? "failed" : "succeeded");
1524 #endif
1525 return np;
1526 }
1527
1528 /* BSHRINK -- Shrink buffer pool call-back function. */
1529
static void bshrink(buf)
1531 void *buf;
1532 {
1533 if (((char *) buf) == bp) {
1534 #ifdef EXPTRACE
1535 V printf("Initial pool released.\n");
1536 #endif
1537 bp = NULL;
1538 }
1539 #ifdef EXPTRACE
1540 V printf("Shrink pool.\n");
1541 #endif
1542 myfree((char *) buf);
1543 }
1544
1545 #endif /* BECtl */
1546
1547 /* Restrict buffer requests to those large enough to contain our pointer and
1548 small enough for the CPU architecture. */
1549
static bufsize blimit(bufsize bs)
1551 {
1552 if (bs < sizeof(char *)) {
1553 bs = sizeof(char *);
1554 }
1555
1556 /* This is written out in this ugly fashion because the
1557 cool expression in sizeof(int) that auto-configured
1558 to any length int befuddled some compilers. */
1559
1560 if (sizeof(int) == 2) {
1561 if (bs > 32767) {
1562 bs = 32767;
1563 }
1564 } else {
1565 if (bs > 200000) {
1566 bs = 200000;
1567 }
1568 }
1569 return bs;
1570 }
1571
int bget_main_test(void *(*malloc_func)(size_t), void (*free_func)(void *))
1573 {
1574 int i;
1575 #ifdef UsingFloat
1576 double x;
1577 #endif
1578
1579 mymalloc = malloc_func;
1580 myfree = free_func;
1581
/* Seed the random number generator with a fixed value so the test
   is guaranteed repeatable from run to run. */
1585
1586 mysrand(1234);
1587
1588 /* Compute x such that pow(x, p) ranges between 1 and 4*ExpIncr as
1589 p ranges from 0 to ExpIncr-1, with a concentration in the lower
1590 numbers. */
1591
1592 #ifdef UsingFloat
x = exp(log(4.0 * ExpIncr) / (ExpIncr - 1.0));
1596 #endif
1597
1598 #ifdef BECtl
1599 bectl(bcompact, bexpand, bshrink, (bufsize) ExpIncr, &mypoolset);
1600 bp = mymalloc(ExpIncr);
1601 assert(bp != NULL);
bpool((void *) bp, (bufsize) ExpIncr, &mypoolset);
1603 #else
1604 bp = mymalloc(PoolSize);
1605 assert(bp != NULL);
1606 bpool((void *) bp, (bufsize) PoolSize, &mypoolset);
1607 #endif
1608
1609 stats("Create pool", &mypoolset);
1610 #ifdef BufValid
1611 V bpoolv((void *) bp);
1612 #endif
1613 #ifdef BufDump
1614 bpoold((void *) bp, dumpAlloc, dumpFree);
1615 #endif
1616
1617 for (i = 0; i < TestProg; i++) {
1618 char *cb;
1619 #ifdef UsingFloat
1620 bufsize bs = pow(x, (double) (myrand() & (ExpIncr - 1)));
1621 #else
1622 bufsize bs = (myrand() & (ExpIncr * 4 - 1)) / (1 << (myrand() & 0x7));
1623 #endif
1624 bufsize align = 0;
1625 bufsize hdr_size = 0;
1626
1627 switch (rand() & 0x3) {
1628 case 1:
1629 align = 32;
1630 break;
1631 case 2:
1632 align = 64;
1633 break;
1634 case 3:
1635 align = 128;
1636 break;
1637 default:
1638 break;
1639 }
1640
1641 hdr_size = (rand() & 0x3) * BGET_HDR_QUANTUM;
1642
1643 assert(bs <= (((bufsize) 4) * ExpIncr));
1644 bs = blimit(bs);
1645 if (myrand() & 0x400) {
1646 cb = (char *) bgetz(align, hdr_size, bs, &mypoolset);
1647 } else {
1648 cb = (char *) bget(align, hdr_size, bs, &mypoolset);
1649 }
1650 if (cb == NULL) {
1651 #ifdef EasyOut
1652 break;
1653 #else
1654 char *bc = bchain;
1655
1656 if (bc != NULL) {
1657 char *fb;
1658
1659 fb = *((char **) bc);
1660 if (fb != NULL) {
1661 *((char **) bc) = *((char **) fb);
1662 brel((void *) fb, &mypoolset, true/*wipe*/);
1663 }
1664 }
1665 continue;
1666 #endif
1667 }
1668 assert(!align || !(((unsigned long)cb + hdr_size) & (align - 1)));
1669 *((char **) cb) = (char *) bchain;
1670 bchain = cb;
1671
1672 /* Based on a random cast, release a random buffer in the list
1673 of allocated buffers. */
1674
1675 if ((myrand() & 0x10) == 0) {
1676 char *bc = bchain;
1677 int j = myrand() & 0x3;
1678
1679 while (j > 0 && bc != NULL) {
1680 bc = *((char **) bc);
1681 j--;
1682 }
1683 if (bc != NULL) {
1684 char *fb;
1685
1686 fb = *((char **) bc);
1687 if (fb != NULL) {
1688 *((char **) bc) = *((char **) fb);
1689 brel((void *) fb, &mypoolset, true/*wipe*/);
1690 }
1691 }
1692 }
1693
1694 /* Based on a random cast, reallocate a random buffer in the list
1695 to a random size */
1696
1697 if ((myrand() & 0x20) == 0) {
1698 char *bc = bchain;
1699 int j = myrand() & 0x3;
1700
1701 while (j > 0 && bc != NULL) {
1702 bc = *((char **) bc);
1703 j--;
1704 }
1705 if (bc != NULL) {
1706 char *fb;
1707
1708 fb = *((char **) bc);
1709 if (fb != NULL) {
1710 char *newb;
1711
1712 #ifdef UsingFloat
1713 bs = pow(x, (double) (myrand() & (ExpIncr - 1)));
1714 #else
1715 bs = (rand() & (ExpIncr * 4 - 1)) / (1 << (rand() & 0x7));
1716 #endif
1717 bs = blimit(bs);
1718 #ifdef BECtl
1719 protect = 1; /* Protect against compaction */
1720 #endif
1721 newb = (char *) bgetr((void *) fb, align, hdr_size, bs, &mypoolset);
1722 #ifdef BECtl
1723 protect = 0;
1724 #endif
1725 if (newb != NULL) {
1726 assert(!align || !(((unsigned long)newb + hdr_size) &
1727 (align - 1)));
1728 *((char **) bc) = newb;
1729 }
1730 }
1731 }
1732 }
1733 }
1734 stats("\nAfter allocation", &mypoolset);
1735 if (bp != NULL) {
1736 #ifdef BufValid
1737 V bpoolv((void *) bp);
1738 #endif
1739 #ifdef BufDump
1740 bpoold((void *) bp, dumpAlloc, dumpFree);
1741 #endif
1742 }
1743
1744 while (bchain != NULL) {
1745 char *buf = bchain;
1746
1747 bchain = *((char **) buf);
1748 brel((void *) buf, &mypoolset, true/*wipe*/);
1749 }
1750 stats("\nAfter release", &mypoolset);
1751 #ifndef BECtl
1752 if (bp != NULL) {
1753 #ifdef BufValid
1754 V bpoolv((void *) bp);
1755 #endif
1756 #ifdef BufDump
1757 bpoold((void *) bp, dumpAlloc, dumpFree);
1758 #endif
1759 }
1760 #endif
1761
1762 return 0;
1763 }
1764 #endif
1765