<feed xmlns='http://www.w3.org/2005/Atom'>
<title>sciteco/src/memory.cpp, branch master-fmsbw-ci</title>
<subtitle>Scintilla-based Text Editor and COrrector</subtitle>
<link rel='alternate' type='text/html' href='https://git.fmsbw.de/sciteco/'/>
<entry>
<title>THE GREAT CEEIFICATION EVENT</title>
<updated>2021-05-30T01:12:56+00:00</updated>
<author>
<name>Robin Haberkorn</name>
<email>robin.haberkorn@googlemail.com</email>
</author>
<published>2021-05-30T00:38:43+00:00</published>
<link rel='alternate' type='text/html' href='https://git.fmsbw.de/sciteco/commit/?id=432ad24e382681f1c13b07e8486e91063dd96e2e'/>
<id>432ad24e382681f1c13b07e8486e91063dd96e2e</id>
<content type='text'>
This is a total conversion of SciTECO to plain C (GNU C11).
The chance was taken to improve a lot of internal data structures,
fix fundamental bugs and lay the foundations for future features.
The GTK user interface is now in a usable state!
All changes have been squashed together.

The language itself is almost unchanged, except for:

* Detection of string terminators (usually Escape) now takes
  the string building characters into account.
  A string is only terminated outside of string building characters.
  In other words, you can now for instance write
  I^EQ[Hello$world]$
  This removes one of the last bits of shellism, which is out of
  place in SciTECO, where no tokenization/lexing is performed.
  Consequently, the current termination character can also be
  escaped using ^Q/^R.
  This is used by auto-completions to make sure that strings
  are inserted verbatim and without unwanted side effects.
* All strings can now safely contain null-characters
  (see also: 8-bit cleanliness).
  The null-character itself (^@) is not (yet) a valid SciTECO
  command, though.

An incomplete list of changes:

* We got rid of the BSD headers for RB trees and lists/queues.
  The problem with them was that they used a form of metaprogramming
  only to gain a bit of type safety. It also resulted in less
  readable code. This was a C++ disease.
  The new code avoids metaprogramming whose only purpose is type safety.
  The BSD tree.h has been replaced by rb3ptr by Jens Stimpfle
  (https://github.com/jstimpfle/rb3ptr).
  This implementation is also more memory-efficient than BSD's.
  The BSD list.h and queue.h have been replaced with a custom
  src/list.h.
* Fixed crashes, performance issues and compatibility issues with
  the Gtk 3 User Interface.
  It is now more or less ready for general use.
  The GDK lock is no longer used, so as to avoid deprecated functions.
  On the downside, the new implementation (driving the Gtk event loop
  stepwise) is even slower than the old one.
  A few glitches remain (see TODO), but it is hoped that they will
  be resolved by the Scintilla update which will be performed soon.
* A lot of program units have been split up, so they are shorter
  and easier to maintain: core-commands.c, qreg-commands.c,
  goto-commands.c, file-utils.h.
* Parser states are simply structs of callbacks now.
  They still support a kind of polymorphism via a preprocessor trick.
  TECO_DEFINE_STATE() takes an initializer list that will be
  merged with the default list of field initializers.
  To "subclass" states, you can simply define new macros that add
  initializers to existing macros.
* Parsers no longer have a "transitions" table; instead, the input_cb()
  may use switch-case statements.
  There is also teco_machine_main_transition_t now, which can
  be used to implement simple transitions. Additionally, you
  can specify functions to execute during transitions.
  This largely avoids long switch-case statements.
* Parsers are embeddable/reusable now, at least in parse-only mode.
  This does not currently bring any advantages but may later
  be used to write a Scintilla lexer for TECO syntax highlighting.
  Once parsers are fully embeddable, it will also be possible
  to run TECO macros in a kind of coroutine which would allow
  them to process string arguments in real time.
* undo.[ch] still uses metaprogramming extensively, but via
  the C preprocessor of course. On the downside, most undo
  token generators must be instantiated explicitly (theoretically
  we could have used nested functions / trampolines to
  instantiate them automatically, but this has turned out to be
  dangerous).
  There is a TECO_DEFINE_UNDO_CALL() to generate closures for
  arbitrary functions now (i.e. to call an arbitrary function
  at undo-time). This simplified a lot of code and is much
  shorter than manually pushing undo tokens in many cases.
* Instead of the ridiculous C++ Curiously Recurring Template
  Pattern to achieve static polymorphism for user interface
  implementations, we now simply declare all functions to
  implement in interface.h and link in the implementations.
  This is possible since we no longer have to define
  interface subclasses (all state is kept in static variables in
  the interface's *.c files).
* Headers are now significantly shorter than in C++ since
  we can often hide more of our "class" implementations.
* Memory counting is based on dlmalloc for most platforms now.
  Unfortunately, there is no malloc implementation that
  provides an efficient constant-time memory counter that
  is guaranteed to decrease when freeing memory.
  But since we use a defined malloc implementation now,
  malloc_usable_size() can be used safely for tracking memory use.
  Replacing malloc() is very tricky on Windows, so we
  use a polling thread there. This can also be enabled
  on other supported platforms using --disable-malloc-replacement.
  All in all, I'm still not pleased with the state of memory
  limiting. It is a mess.
* Error handling uses GError now. This has the advantage that
  the GError codes can be reused once we support error catching
  in the SciTECO language.
* Added a few more test suite cases.
* Haiku is no longer supported as builds are unstable and
  I did not manage to debug them - quite possibly Haiku bugs
  were responsible.
* Glib v2.44 or later is now required.
  The GTK UI requires Gtk+ v3.12 or later now.
  The GtkFlowBox fallback and sciteco-wrapper workaround are
  no longer required.
* We now extensively use the GCC/Clang-specific g_auto
  feature (automatic deallocations when leaving the current
  code block).
* Updated copyright to 2021.
  SciTECO has been in continuous development, even though there
  have been no commits since 2018.
* Since these changes are so significant, the target release has
  been set to v2.0.
  It is planned that beginning with v3.0, the language will be
  kept stable.
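
The TECO_DEFINE_STATE() trick described above can be sketched in a few
lines of C11. All names here are hypothetical stand-ins rather than the
actual SciTECO definitions; the point is merely that later designated
initializers override earlier ones, so "subclass" macros can stack
overrides on top of a base macro's defaults:

```c
#include "assert.h"

/* a parser state is a plain struct of callbacks */
typedef struct {
	int (*input_cb)(int chr);
	int (*done_cb)(void);
} state_t;

static int default_input(int chr) { return chr; }
static int default_done(void) { return 0; }

/* base macro: supplies defaults; extra initializers in __VA_ARGS__
 * come last and therefore override the defaults */
#define DEFINE_STATE(NAME, ...) \
	static const state_t NAME = { \
		.input_cb = default_input, \
		.done_cb = default_done, \
		__VA_ARGS__ \
	}

/* a "subclass" macro simply adds its own initializer on top */
#define DEFINE_DONE_STATE(NAME, DONE_FN) \
	DEFINE_STATE(NAME, .done_cb = DONE_FN)

static int my_done(void) { return 42; }
DEFINE_DONE_STATE(example_state, my_done);
```

GCC flags the duplicate initializers with -Woverride-init (under
-Wextra), but the construct is well-defined C.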
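
The closure idea behind TECO_DEFINE_UNDO_CALL() mentioned above can be
illustrated with a hand-rolled undo stack. This is a deliberately
simplified sketch with made-up names; the real macro generates a
distinct token type per wrapped function:

```c
#include "stdlib.h"
#include "assert.h"

/* an undo token captures a function pointer plus its argument */
typedef struct undo_token {
	struct undo_token *next;
	void (*call)(int arg);
	int arg;
} undo_token_t;

static undo_token_t *undo_stack = NULL;

/* push a "closure": call will be invoked with arg at undo-time */
static void undo_push_call(void (*call)(int), int arg)
{
	undo_token_t *tok = malloc(sizeof(*tok));
	if (!tok)
		abort();
	tok->call = call;
	tok->arg = arg;
	tok->next = undo_stack;
	undo_stack = tok;
}

/* pop and execute all tokens, rolling back to the initial state */
static void undo_rollback(void)
{
	while (undo_stack) {
		undo_token_t *tok = undo_stack;
		undo_stack = tok->next;
		tok->call(tok->arg);
		free(tok);
	}
}

static int counter = 0;
static void set_counter(int v) { counter = v; }
```

Pushing undo_push_call(set_counter, counter) before modifying counter
restores the old value on rollback.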
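
The counting scheme described above boils down to adjusting a counter by
malloc_usable_size() around every allocation. A minimal sketch, assuming
glibc (where malloc.h provides malloc_usable_size()); the names are
illustrative, not SciTECO's actual wrappers:

```c
#include "stdlib.h"
#include "malloc.h"   /* malloc_usable_size(), glibc/dlmalloc */
#include "assert.h"

static size_t memory_usage = 0;

/* count the chunk's real (usable) size, not the requested size */
static void *counted_malloc(size_t size)
{
	void *p = malloc(size);
	if (p)
		memory_usage += malloc_usable_size(p);
	return p;
}

/* the same chunk reports the same usable size, so the counter
 * returns exactly to its previous value on free */
static void counted_free(void *p)
{
	if (p)
		memory_usage -= malloc_usable_size(p);
	free(p);
}
```

This only works reliably because the malloc implementation is known
(dlmalloc), as the commit message stresses.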
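
g_auto and friends are built on the GCC/Clang cleanup variable
attribute, which runs a designated function when the variable leaves
scope. A GLib-free sketch of the underlying mechanism (illustrative
names):

```c
#include "stdlib.h"
#include "assert.h"

static int cleanups_run = 0;

/* invoked automatically with the address of the annotated variable */
static void free_str(char **p)
{
	free(*p);
	cleanups_run++;
}

static void demo(void)
{
	__attribute__((cleanup(free_str))) char *buf = malloc(64);
	(void)buf;
	/* free_str() runs here, when buf goes out of scope */
}
```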
</content>
</entry>
<entry>
<title>fixed memory leaks and memory measurement leaks by removing -fsized-deallocation</title>
<updated>2017-08-24T21:06:26+00:00</updated>
<author>
<name>Robin Haberkorn</name>
<email>robin.haberkorn@googlemail.com</email>
</author>
<published>2017-08-24T20:52:37+00:00</published>
<link rel='alternate' type='text/html' href='https://git.fmsbw.de/sciteco/commit/?id=ba6ea2fd0c0559c6e8d8108bd25252ef7aab68d0'/>
<id>ba6ea2fd0c0559c6e8d8108bd25252ef7aab68d0</id>
<content type='text'>
 * Array allocations were not properly accounted for, since the compiler
   would call the replacement new(), which assumes that it would
   always be called along with the replacement sized deletion.
   This is not true for array new[] allocations, resulting in
   a constant increase of memory_usage and unrecoverable situations.
   This problem, however, could in principle be fixed by avoiding
   memory counting for arrays or falling back to malloc_usable_size().
 * The bigger problem was that some STLs (new_allocator) are broken, calling the
   non-sized delete for regular new() calls which could in principle
   be matched by the sized delete.
   This is also the reason why I had to provide a non-sized
   delete replacement, which in reality introduced memory leaks.
 * Since adding checks for the broken compiler versions or a configure-time
   check that tries to detect these broken systems seems tedious,
   I simply removed that optimization.
 * This means we always have to rely on malloc_usable_size() now
   for non-SciTECO-object memory measurement.
 * Perhaps in the future, there should be an option for allowing
   portable measurement at the cost of memory usage, by prefixing
   each memory chunk with the chunk size.
   Maintainers could then decide to optimize their build for "speed"
   at the cost of memory overhead.
 * Another solution to this never-ending odyssey might be to introduce
   our own allocator, replacing malloc(), and allowing our own
   precise measurements.
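
The chunk-size prefixing proposed above could look roughly like this
(illustrative names; a production version would also have to respect
max_align_t alignment for the returned pointer):

```c
#include "stdlib.h"
#include "assert.h"

static size_t total_allocated = 0;

/* prefix each chunk with its size: sizeof(size_t) bytes of
 * overhead per allocation, but fully portable accounting */
static void *sized_malloc(size_t size)
{
	size_t *p = malloc(sizeof(size_t) + size);
	if (!p)
		return NULL;
	*p = size;
	total_allocated += size;
	return p + 1;   /* user data starts after the prefix */
}

static void sized_free(void *ptr)
{
	if (!ptr)
		return;
	size_t *p = (size_t *)ptr - 1;
	total_allocated -= *p;
	free(p);
}
```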
</content>
</entry>
<entry>
<title>define non-sized deallocator and memory counting debugging</title>
<updated>2017-04-30T02:26:43+00:00</updated>
<author>
<name>Robin Haberkorn</name>
<email>robin.haberkorn@googlemail.com</email>
</author>
<published>2017-04-30T02:26:43+00:00</published>
<link rel='alternate' type='text/html' href='https://git.fmsbw.de/sciteco/commit/?id=8d313963e7680d1dadd7fd6a3c271c2792ffe509'/>
<id>8d313963e7680d1dadd7fd6a3c271c2792ffe509</id>
<content type='text'>
 * it turned out to be possible to provoke memory_usage
   overflows or underruns, resulting in unrecoverable states
 * a possible reason is that, at least with G++ 5.4.0,
   the compiler would sometimes call the (default) non-sized
   delete followed by our custom sized delete/deallocator.
 * This was true even after compiling Scintilla with -fsized-deallocation.
 * therefore we provide an empty non-sized delete now.
 * memory_usage counting can now be debugged by uncommenting
   DEBUG_MAGIC in memory.cpp. This uses a magic value to detect
   instrumented allocations being mixed with non-instrumented
   allocations.
 * simplified the global sized-deallocation functions
   (they are identical to the Object-class allocators).
</content>
</entry>
<entry>
<title>yet another revision of memory limiting: the glibc mallinfo() approach has been shown to be unacceptably broken, so the fallback implementation has been improved</title>
<updated>2017-03-08T11:55:06+00:00</updated>
<author>
<name>Robin Haberkorn</name>
<email>robin.haberkorn@googlemail.com</email>
</author>
<published>2017-03-08T11:00:42+00:00</published>
<link rel='alternate' type='text/html' href='https://git.fmsbw.de/sciteco/commit/?id=152397e641e9d1e6a11f80e24f562c4cf2472a2f'/>
<id>152397e641e9d1e6a11f80e24f562c4cf2472a2f</id>
<content type='text'>
 * mallinfo() is not only broken on 64-bit systems but slows things
   down linearly with the memory size of the process.
   E.g. after 500000&lt;%A&gt;, SciTECO will act sluggish! Shutting down
   afterwards can take minutes...
   mallinfo() was thus finally discarded as a memory measurement
   technique.
 * Reading /proc/self/statm has also been evaluated and discarded
   because doing this frequently is even slower.
 * Instead, the fallback implementation has been drastically improved:
   * If possible use C++14 global sized deallocators, allowing memory measurements
     across the entire C++ code base with minimal runtime overhead.
     Since we only depend on C++11, a lengthy Autoconf check had to be introduced.
   * Use malloc_usable_size() with global non-sized deallocators to
     measure the approx. memory usage of the entire process (at least
     the allocations done via C++).
     The cheaper C++11 sized deallocators implemented via SciTECO::Object still
     have precedence, so this affects Scintilla code only.
 * With both improvements the test case
   sciteco -e '&lt;@EU[X^E\a]"^E\a"%a&gt;'
   is handled sufficiently well on glibc now, and performance is
   much better.
 * The jemalloc-specific technique has been removed since it no longer
   brings any benefits compared to the improved fallback technique.
   Even the case of using malloc_usable_size() in strict C++ mode is
   up to 3 times faster.
 * The new fallback implementation might actually be good enough for
   Windows as well if some MSVCRT-specific support is added, like
   using _msize() instead of malloc_usable_size().
   This must be tested and benchmarked, so we keep the Windows-specific
   implementation for the time being.
</content>
</entry>
<entry>
<title>roll back to the old mallinfo() implementation of memory limiting on Linux and added a FreeBSD/jemalloc-specific implementation</title>
<updated>2017-03-06T21:09:17+00:00</updated>
<author>
<name>Robin Haberkorn</name>
<email>robin.haberkorn@googlemail.com</email>
</author>
<published>2017-03-06T16:34:45+00:00</published>
<link rel='alternate' type='text/html' href='https://git.fmsbw.de/sciteco/commit/?id=a2e52ca49c6a5495f134648e91647008dca4a742'/>
<id>a2e52ca49c6a5495f134648e91647008dca4a742</id>
<content type='text'>
 * largely reverts 39cfc573, but leaves in minor and documentation
   changes.
 * further experimentation with memory limiting via malloc() wrapping
   has shown additional problems, like dlsym() calling malloc functions,
   further reducing the implementation to glibc-specific means.
   This means there had been no implementation for FreeBSD and checks
   would have to rely on undocumented internal implementation details
   of different libcs, which is not a good thing.
   * Other problems, like having to wrap calloc(), guarding against
     underruns and ensuring multi-thread safety, have been identified
     but could be worked around.
 * A technique of calculating the memory usage as sbrk(0) - &amp;end
   has been shown to be effective enough, at least on glibc.
   However even on glibc it has shortcomings, since malloc() will
   sometimes use mmap() for allocations and the technique
   relies on implementation details of the libc.
   Furthermore another malloc_trim(0) had to be added to the error
   recovery in interactive mode, since glibc does not adjust the program break
   automatically (to avoid syscalls I presume).
 * On FreeBSD/jemalloc, the sbrk(0) method totally fails because jemalloc
   exclusively allocates via mmap() -&gt; that solution was discarded as well.
 * Since all evaluated techniques turn out to be highly platform
   specific, I reverted to the simple and stable platform-specific
   mallinfo() API on Linux.
 * On FreeBSD/jemalloc, it's possible to use mallctl("stats.allocated")
   for the same purpose - so it works there, too now.
   It's slower than the other techniques, though.
 * A lengthy discussion has been added to memory.cpp, so that we
   do not repeat the previous mistakes.
</content>
</entry>
<entry>
<title>memory limiting: libc malloc() and realloc() can return NULL</title>
<updated>2017-03-05T17:55:46+00:00</updated>
<author>
<name>Robin Haberkorn</name>
<email>robin.haberkorn@googlemail.com</email>
</author>
<published>2017-03-05T17:55:46+00:00</published>
<link rel='alternate' type='text/html' href='https://git.fmsbw.de/sciteco/commit/?id=d9e384e47f44ceadd5738cfaf885aa10260d1923'/>
<id>d9e384e47f44ceadd5738cfaf885aa10260d1923</id>
<content type='text'>
 * shouldn't make much of a difference, since we're in deep trouble
   when they return NULL, but the wrappers should be transparent
   instead of crashing in malloc_usable_size().
</content>
</entry>
<entry>
<title>replaced Linux-specific mallinfo()-based memory limiting with a more portable and faster hack</title>
<updated>2017-03-05T17:15:05+00:00</updated>
<author>
<name>Robin Haberkorn</name>
<email>robin.haberkorn@googlemail.com</email>
</author>
<published>2017-03-05T17:15:05+00:00</published>
<link rel='alternate' type='text/html' href='https://git.fmsbw.de/sciteco/commit/?id=39cfc5731695c46a337606da9bc86a659dbad5b3'/>
<id>39cfc5731695c46a337606da9bc86a659dbad5b3</id>
<content type='text'>
 * Works by "hooking" into malloc() and friends and counting the
   usable heap object sizes with malloc_usable_size().
   Thus, it has no memory overhead.
 * Will work at least on Linux and (Free)BSD.
   Other UNIXoid systems may work as well - this is tested by ./configure.
 * Usually faster than even the fallback implementation since the
   memory limit is hit earlier.
 * A similar approach could be tried on Windows (TODO).
 * Proper memory limiting that counts all malloc()s in the system can make
   a huge difference, as this test case shows:
   sciteco -e '&lt;@EU[X^E\a]"^E\a"%a&gt;'
   It will allocate gigabytes before hitting the 500MB memory limit...
 * Fixed the UNIX-function checks on BSDs.
</content>
</entry>
<entry>
<title>updated copyright to 2017</title>
<updated>2017-03-03T14:32:57+00:00</updated>
<author>
<name>Robin Haberkorn</name>
<email>robin.haberkorn@googlemail.com</email>
</author>
<published>2017-03-03T14:32:20+00:00</published>
<link rel='alternate' type='text/html' href='https://git.fmsbw.de/sciteco/commit/?id=0bbcd7652a948424156968298e4d2f27b998cfe2'/>
<id>0bbcd7652a948424156968298e4d2f27b998cfe2</id>
<content type='text'>
</content>
</entry>
<entry>
<title>partially reversed/fixed-up b7ff56db631: avoid g_slice allocators and performance issues with memory measurements</title>
<updated>2016-11-22T17:03:48+00:00</updated>
<author>
<name>Robin Haberkorn</name>
<email>robin.haberkorn@googlemail.com</email>
</author>
<published>2016-11-21T15:58:29+00:00</published>
<link rel='alternate' type='text/html' href='https://git.fmsbw.de/sciteco/commit/?id=20fcf2feccbe2c48ee33cee73ed8bf9a6d4a06a2'/>
<id>20fcf2feccbe2c48ee33cee73ed8bf9a6d4a06a2</id>
<content type='text'>
 * Fixed build problems on Windows
 * g_slice on Windows has been shown to be of little use either,
   and it does not work well with the GetProcessMemoryInfo()
   measurements.
   Also, it brings the same problem as on Glibc: not even command-line
   termination returns the memory to the OS.
   Therefore, we don't use g_slice at all anymore and added a comment
   explaining why.
 * The custom Linux and Windows memory measurement approaches
   have been shown to be inefficient.
   As a workaround, scripts disable memory limiting.
 * A better approach -- but it will only work on Glibc -- might
   be to hook into malloc(), realloc() and free() globally
   and use the malloc_usable_size() of a heap object for
   memory measurements. This will be relatively precise and cheap.
 * We still need the "Object" base class in order to measure
   memory usage as a fallback approach.
</content>
</entry>
<entry>
<title>fixed glib warnings about using g_mem_set_vtable() and revised memory limiting</title>
<updated>2016-11-20T17:18:36+00:00</updated>
<author>
<name>Robin Haberkorn</name>
<email>robin.haberkorn@googlemail.com</email>
</author>
<published>2016-11-20T08:00:50+00:00</published>
<link rel='alternate' type='text/html' href='https://git.fmsbw.de/sciteco/commit/?id=b7ff56db631be7416cf228dff89cb23d753e4ec8'/>
<id>b7ff56db631be7416cf228dff89cb23d753e4ec8</id>
<content type='text'>
 * We were basing the Glib allocators on throwing std::bad_alloc just like
   the C++ operators. However, this was always unsafe, since we were throwing
   exceptions across plain-C frames (Glib).
   Also, the memory vtable has been deprecated in Glib, resulting in
   ugly warnings.
 * Instead, we now let the C++ new/delete operators behave like Glib
   by basing them on g_malloc()/g_slice.
   This means they abort and the application terminates
   abnormally in case of OOM. OOM cannot be handled gracefully anyway,
   so it is more important to have a good memory limiting mechanism.
 * Memory limiting has been completely revised.
   Instead of approximating undo stack sizes using virtual methods
   (which is imprecise and comes with a performance penalty),
   we now use a common base class SciTECO::Object to count the memory
   required by all objects allocated within SciTECO.
   This is less precise than using global replacement new/delete operators,
   which would allow us to control allocations in all C++ code including
   Scintilla, but sized global deallocation functions are only supported
   as of C++14 (GCC 5) and adding compile-time checks would be cumbersome.
   In any case, we would still be missing Glib allocations (esp. strings).
 * As a platform-specific extension, on Linux/glibc we use mallinfo()
   to count the exact memory usage of the process.
   On Windows, we use GetProcessMemoryInfo() -- the latter implementation
   is currently UNTESTED.
 * We use g_malloc() for the new/delete operators wherever
   malloc_trim() is available, since g_slice does not free heap chunks
   properly (probably because it does its own mmap()ing), rendering
   malloc_trim() ineffective.
   We have also benchmarked g_slice on Linux/glibc (malloc_trim() should
   not be available elsewhere) and found that it brings no significant
   performance benefit.
   On all other platforms, we use g_slice, since it is assumed
   that it at least does not hurt.
   The new g_slice-based allocators should be tested on MSVCRT,
   since I assume that they bring a significant performance benefit
   on Windows.
 * Memory limiting now works in batch mode as well and is still
   enabled by default.
 * The old UndoTokenWithSize CRTP hack could be removed.
   UndoStack operations should be a bit faster now.
   On the other hand, there is now some overhead due to repeated
   memory limit checking on every processed character.
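The counting base class described above might be sketched as follows.
This is illustrative only: apart from the Object name, all identifiers
are hypothetical, and plain malloc()/abort() stand in for g_malloc(),
which aborts on OOM by itself.

```cpp
// Hypothetical sketch of a counting base class in the spirit of
// SciTECO::Object: every class deriving from it has its allocations
// added to a global counter via class-scope operator new/delete.
#include <cstddef>
#include <cstdlib>

static std::size_t teco_memory_usage = 0;  // illustrative global counter

class Object {
public:
    static void *operator new(std::size_t size)
    {
        teco_memory_usage += size;
        void *p = std::malloc(size);  // stand-in for g_malloc()
        if (!p)
            std::abort();             // terminate abnormally on OOM
        return p;
    }
    static void operator delete(void *p, std::size_t size)
    {
        teco_memory_usage -= size;
        std::free(p);
    }
    virtual ~Object() {}
};

// Hypothetical derived class, e.g. an undo token:
class UndoToken : public Object {
    char payload[64];
};
```

Because the class-scope operator delete receives the object's size, the
counter can be decremented exactly, without storing per-object sizes as
the old UndoTokenWithSize hack did.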
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
 * We were basing the Glib allocators on throwing std::bad_alloc just like
   the C++ operators. However, this was always unsafe, since we were throwing
   exceptions across plain-C frames (Glib).
   Also, the memory vtable has been deprecated in Glib, resulting in
   ugly warnings.
 * Instead, we now let the C++ new/delete operators behave like Glib
   by basing them on g_malloc()/g_slice.
   This means they abort and the application terminates
   abnormally in case of OOM. OOM cannot be handled gracefully anyway,
   so it is more important to have a good memory limiting mechanism.
 * Memory limiting has been completely revised.
   Instead of approximating undo stack sizes using virtual methods
   (which is imprecise and comes with a performance penalty),
   we now use a common base class SciTECO::Object to count the memory
   required by all objects allocated within SciTECO.
   This is less precise than using global replacement new/delete operators,
   which would allow us to control allocations in all C++ code including
   Scintilla, but sized global deallocation functions are only supported
   as of C++14 (GCC 5) and adding compile-time checks would be cumbersome.
   In any case, we would still be missing Glib allocations (esp. strings).
 * As a platform-specific extension, on Linux/glibc we use mallinfo()
   to count the exact memory usage of the process.
   On Windows, we use GetProcessMemoryInfo() -- the latter implementation
   is currently UNTESTED.
 * We use g_malloc() for the new/delete operators wherever
   malloc_trim() is available, since g_slice does not free heap chunks
   properly (probably because it does its own mmap()ing), rendering
   malloc_trim() ineffective.
   We have also benchmarked g_slice on Linux/glibc (malloc_trim() should
   not be available elsewhere) and found that it brings no significant
   performance benefit.
   On all other platforms, we use g_slice, since it is assumed
   that it at least does not hurt.
   The new g_slice-based allocators should be tested on MSVCRT,
   since I assume that they bring a significant performance benefit
   on Windows.
 * Memory limiting now works in batch mode as well and is still
   enabled by default.
 * The old UndoTokenWithSize CRTP hack could be removed.
   UndoStack operations should be a bit faster now.
   On the other hand, there is now some overhead due to repeated
   memory limit checking on every processed character.
</pre>
</div>
</content>
</entry>
</feed>
