|
* It turns out that `bool` (_Bool) in bitfields may cause
padding to the next 32-bit word.
This was only observed on MinGW.
I am not entirely sure why, although the C standard does
not guarantee much with regard to bitfield memory layout
and there are 64 bits available due to passing anyway.
Actually, they could also be laid out in a different order.
* I am now consistently using guint instead of `bool` in bitfields
to prevent any potential surprises.
* The way that guint was aliased with bitfield structs
for undoing teco_machine_main_t and teco_machine_qregspec_t flags
was therefore unsafe.
It was not guaranteed that the __flags field really "captured"
all of the bitfield.
Even with `guint v : 1` fields, this was not guaranteed.
We would have needed a static assertion for robustness.
Alternatively, we could have declared a `gsize __flags` variable
as well. This __should__ be safe since gsize should always be
pointer sized and correspond to the platform's alignment.
However, it's also not 100% guaranteed.
Using classic ANSI C enums with bit operations to encode multiple
fields and flags into a single integer also doesn't look very
attractive.
* Instead, we now define scalar types with their own teco_undo_push()
shortcuts for the bitfield structs.
This is in one way simpler and much more robust, but on the other
hand it complicates access to the flag variables.
* It's a good question whether a `struct __attribute__((packed))` bitfield
with guint fields would be a reliable replacement for flag enums that
are communicated to the "outside" (TECO) world.
I am not going to risk it until GCC gives any guarantees, though.
For the time being, bitfields are only used internally where
the concrete memory layout (bit positions) is not crucial.
* This fixes the test suite and therefore probably CI and nightly
builds on Windows.
* Test case: Rub out `@I//` or `@Xq` until before the `@`.
The parser doesn't know that `@` is still set and allows
all sorts of commands where `@` should be forbidden.
* It's unknown how long this has been broken on Windows - quite
possibly since v2.0.
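* Roughly, the resulting pattern looks like this (a sketch with hypothetical
names, not the actual SciTECO definitions):

    #include <glib.h>

    /* Flags are kept in plain guint bitfields - no bool/_Bool/gboolean. */
    typedef struct {
            guint at_modifier : 1;
            guint colon_modifier : 2;
            guint mode : 3;
    } example_flags_t;

    /*
     * If such a struct were still to be aliased with a plain integer for
     * undo purposes, at least a static assertion would be needed to catch
     * layouts where the bitfield does not fit into the aliased type:
     */
    G_STATIC_ASSERT(sizeof(example_flags_t) <= sizeof(guint));

Instead of relying on such an assertion, the scalar types with their own
teco_undo_push() shortcuts mentioned above sidestep the layout question entirely.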
|
|
* Instead of separate stand-alone commands, they are now allowed only immediately
in front of the commands that accept them.
* The order is still insignificant if both `@` and `:` are accepted.
* The number of colon modifiers is now also checked.
We basically get this for free.
* `@` has syntactic significance, so it could not be set conditionally anyway.
Still, it was possible to provoke bugs where `@` was interpreted conditionally
as in `@ 2<I/foo/$>`.
* Even when not causing bugs, a mistyped `@` would often influence the
__next__ command, causing unexpected behavior, for instance when
typing `@(233C)W`.
* While it was theoretically possible to set `:` conditionally, it could also
be "passed through" accidentally to some command where it wasn't expected as in
`:Ifoo$ C`.
I do not know of any really useful application or idiom of a conditionally set `:`.
If some useful application should turn up, `:'` and `:|` could
easily be re-allowed, though.
* I was considering introducing a common parser state for modified commands,
but that would have been tricky and would have introduced a lot of redundant command lists.
So instead, we now simply check for excess modifiers everywhere.
To simplify this task, teco_machine_main_transition_t now contains flags
signaling whether the transition is allowed with `@` or `:` modifiers set.
It currently only has to be checked in the start state, after `E` and `F`.
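* For illustration, the check boils down to something like this (hypothetical
names and fields, not the actual teco_machine_main_transition_t layout):

    #include <glib.h>

    typedef struct {
            gconstpointer next_state;  /* target state (placeholder) */
            guint allow_at : 1;        /* transition legal with "@" set */
            guint allow_colon : 1;     /* transition legal with ":" set */
    } example_transition_t;

    /* Reject a transition if modifiers are set that it does not accept. */
    static gboolean
    example_transition_allowed(const example_transition_t *t,
                               gboolean at_set, guint colon_count)
    {
            return (!at_set || t->allow_at) &&
                   (!colon_count || t->allow_colon);
    }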
|
|
* Curses allows scrolling with the scroll wheel at least
if mouse support is enabled via ED flags.
Gtk always supported that.
* Allow clicking on popup entries to fully autocomplete them.
Since this behavior - just like auto completions - is parser state-dependent,
I introduced a new state method (insert_completion_cb).
All the implementations are currently in cmdline.c since there is some overlap
with the process_edit_cmd_cb implementations.
* Fixed pressing undefined function keys while showing the popup.
The popup area is no longer redrawn/replaced with the Scintilla view.
Instead, continue to show the popup.
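* The shape of such a state method is roughly the following (a sketch; the
real insert_completion_cb signature may differ):

    #include <glib.h>

    /* Invoked when a popup entry is clicked: insert the chosen completion
     * into the command line in a way appropriate to the current parser state. */
    typedef gboolean (*example_insert_completion_cb)(gpointer state_ctx,
                                                     const gchar *entry,
                                                     GError **error);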
|
|
|
|
* This works by embedding the SciTECO parser and driving it always (exclusively)
in parse-only mode.
* A new teco_state_t::style determines the Scintilla style for any character
accepted in the given state.
* Therefore, the SciTECO lexer is always 100% exact and corresponds to the current
SciTECO grammar - it does not have to be maintained separately.
There are a few exceptions and tweaks, though.
* The contents of curly-brace escapes (`@^Uq{...}`) are rendered as ordinary
code using a separate parser instance.
This can be disabled with the lexer.sciteco.macrodef property.
Unfortunately, SciTECO does not currently allow setting lexer properties (FIXME).
* Labels and comments are currently styled the same.
This could change in the future once we introduce real comments.
* Lexers are usually implemented in C++, but I did not want to drag in C++.
Especially not since we'd have to include parser.h and other SciTECO headers,
which we really do not want to keep C++-compatible.
Instead, the lexer is implemented "in the container".
@ES/SCI_SETILEXER/sciteco/ is internally translated to SCI_SETILEXER(NULL)
and we get Scintilla notifications when styling the view becomes necessary.
This is then centrally forwarded to teco_lexer_style(), which
uses the ordinary teco_view_ssm() API for styling.
* Once the command line becomes a Scintilla view even on Curses,
we can enable syntax highlighting of the command line macro.
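* In rough strokes, the container-side styling works like this (a sketch with
hypothetical names; only the Scintilla messages are real):

    #include <glib.h>
    #include <Scintilla.h>

    typedef sptr_t (*example_ssm_fn)(unsigned int msg, uptr_t wparam, sptr_t lparam);

    /* Style `len` bytes starting at `start`: each byte gets the style of the
     * parser state that accepted it (teco_state_t::style in the real code). */
    static void
    example_style_range(example_ssm_fn ssm, uptr_t start,
                        const gchar *text, gsize len)
    {
            ssm(SCI_STARTSTYLING, start, 0);
            for (gsize i = 0; i < len; i++) {
                    /* Placeholder decision: the real code steps the embedded
                     * parse-only parser here and reads the new state's style. */
                    gint style = g_ascii_iscntrl(text[i]) ? 1 : 0;
                    ssm(SCI_SETSTYLING, 1, style);
            }
    }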
|
|
::FS as well)
* The colon modifier can now occur 2 times.
Specifying `@` more than once or `:` more than twice is an error now.
* Commands do not check for excess colon modifiers - almost every command would have
to check it. Instead, a double colon will simply behave like a single colon on most
commands.
* All search commands inherit the anchored semantics, but it's not very useful in some combinations
like -::S, ::N or ::FK.
That's why the `::` variants are not documented everywhere.
* The lexer.checkheader macro could be simplified and should also be faster now,
speeding up startup.
Eventually this macro can be made superfluous, e.g. by using 1:FB or 0,1^Q::S.
|
|
* Can be freely combined with the colon-modifier as well.
:@Xq cut-appends to register q.
* This simply deletes the given buffer range after the copy or append operation
as if followed by another <K> command.
* This has indeed been a very annoying missing feature, as you often have to retype the
range for a K or D command.
At the same time, this cannot be reasonably solved with a macro since macros
do not accept Q-Register arguments -- so we would have to restrict ourselves to one or a few
selected registers.
I was also considering solving this with a special stack operation that duplicates the
top values, so that Xq leaves arguments for K, but this couldn't work for cutting lines
and would also be longer to type.
* It's the first non-string command that accepts @.
Others may follow in the future.
We're approaching ITS TECO madness levels.
|
|
|
|
characters in Q-Register)
* It was initialized only once, so it could inherit the wrong local Q-Register table.
A test case has been added for this particular bug.
* Also, if starting from the profile (batch mode), the state machine could be initialized
without undo, which then later caused problems on rubout in interactive mode.
For instance, if S^EG[a] fails and you would repeatedly type `]`, the Q-Reg name could
grow indefinitely. There were probably other issues as well.
Even crashes should have been possible, although I couldn't reproduce them.
* Since the state machine is required only for the pattern-to-regexp translation,
which is performed anew for every character in interactive mode,
we now create a fresh state machine for every call and don't attempt
any undo.
There might be more efficient ways, like reusing the string building's
Q-Reg parser state machine.
|
|
jumping to the beginning of the macro)
* I am not sure whether this feature is really that useful...
* teco_machine_main_t::macro_pc is now pointing to the __next__ character to execute,
therefore it's easier to manipulate by flow control commands.
Also, it can now be unsigned (gsize) like all other program counters.
* Detected thanks to running the testsuite under Valgrind.
|
|
This reverts commit 024d26ac0cd869826801889f1299df34676fdf57.
This was re-introducing Clang warnings since gboolean is signed.
I should have read the git blame before re-introducing gboolean...
|
|
current state machine
* The previous solution was not wrong, but unnecessarily complex. We already have a flag
for exactly this purpose.
* Avoid redundancies by introducing teco_machine_stringbuilding_set_codepage().
|
|
allow breaking from within braces
For instance, you can now write <23(1;)> without leaving anything on the stack.
|
|
|
|
* This fixes F< jumping to the beginning of the macro, which was broken in 73d574b71a10d4661ada20275cafde75aff6c1ba.
teco_machine_main_t::macro_pc actually has to be signed, as it is sometimes set to -1.
|
|
* ALL keypresses (the UTF-8 sequences resulting from key presses) can now be remapped.
* This is especially useful with Unicode support, as you might want to alias
international characters to their corresponding latin form in the start state,
so you don't have to change keyboard layouts so often.
This is done automatically in Gtk, where we have hardware key press information,
but has to be done with key macros in Curses.
There is a new key mask 4 (bit 3) for that purpose now.
* Also, you might want to define non-ANSI letters to perform special functions in
the start state, where they won't be accepted by the parser anyway.
Suppose you have a macro M→; you could define
@^U[^K→]{m→} 1^_U[^K→]
This effectively "extends" the parser and allows you to call the macro "→" with a single
key press. See also #5.
* The register prefix has been changed from ^F (for function) to ^K (for key).
This is the only thing you have to change in order to migrate existing
function key macros.
* Key macros are enabled by default. There is no longer any way to disable
function key handling in curses, as I never found any reason or need to disable it.
Theoretically, the default ESCDELAY could turn out to be too small, so that function
keys don't get through. I doubt that's possible except on extremely slow serial lines.
Even then, you'd rather increase ESCDELAY and, instead of disabling function keys,
simply define an escape surrogate.
* The ED flag has been removed and its place is reserved for a future mouse support flag
(which does make sense to disable in curses sometimes).
fnkeys.tes is consequently also enabled by default in sample.teco_ini.
* Key macros are handled as a unit. If one character results in an error,
the entire string is rubbed out.
This fixes the "CLOSE" key on Gtk.
It also makes sure that the original error message is preserved and not overwritten
by some subsequent syntax error.
It was never useful that we kept inserting characters after the first error.
|
|
* This is used for error messages (TECO macro stackframes),
so it's important to display columns in characters.
* Program counters are in bytes and therefore everywhere gsize.
This is by glib convention.
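* A minimal sketch of the distinction, using plain glib calls rather than
SciTECO's actual helpers:

    #include <glib.h>

    /* Program counters and byte offsets stay gsize (bytes, per glib convention),
     * but the column shown in error messages is counted in characters. */
    static glong
    example_column(const gchar *line, gsize byte_offset)
    {
            return g_utf8_strlen(line, byte_offset);
    }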
|
|
The following rules apply:
* All SciTECO macros __must__ be in valid UTF-8, regardless of
the register's configured encoding.
This is checked before execution, so we can use glib's non-validating
UTF-8 API afterwards.
* Things will inevitably get slower as we have to validate all macros first
and convert to gunichar for each and every character passed into the parser.
As an optimization, it may make sense to have our own inlineable version of
g_utf8_get_char() (TODO); see the sketch at the end of this entry.
Also, Unicode glyphs in syntactically significant positions may be case-folded -
just like ASCII chars were. This is of course slower than case folding
ASCII. The impact of this should be measured and perhaps we should restrict
case folding to a-z via teco_ascii_toupper().
* The language itself does not use any non-ANSI characters, so you don't have to
use UTF-8 characters.
* Wherever the parser expects a single character, it will now accept an arbitrary
Unicode/UTF-8 glyph as well.
In other words, you can call macros like M§ instead of having to write M[§].
You can also get the codepoint of any Unicode character with ^^x.
Pressing a Unicode character in the start state or in Ex and Fx will now
give a sane error message.
* When pressing a key which produces a multi-byte UTF-8 sequence, the character
gets translated back and forth multiple times:
1. It's converted to a UTF-8 string, either buffered or by IME methods (Gtk).
On Curses we could directly get a wide char using wget_wch(), but it's
not currently used, so we don't depend on widechar curses.
2. Parsed into gunichar for passing into the edit command callbacks.
This also validates the codepoint - everything later on can assume valid
codepoints and valid UTF-8 strings.
3. Once the edit command handling decides to insert the key into the command line,
it is serialized back into a UTF-8 string, as the command line macro has
to be in UTF-8 (like all other macros).
4. The parser reads back gunichars without validation for passing into
the parser callbacks.
* Flickering in the Curses UI and Pango warnings in Gtk, due to incompletely
inserted and displayed UTF-8 sequences, are now fixed.
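* The inlineable decoder mentioned above could look roughly like this
(a sketch, only safe because macros are validated up front; not the
actual SciTECO code):

    #include <glib.h>

    /* Non-validating UTF-8 decode of a single character. */
    static inline gunichar
    example_utf8_get_char(const gchar *p)
    {
            const guchar *s = (const guchar *)p;
            if (s[0] < 0x80)
                    return s[0];
            if ((s[0] & 0xE0) == 0xC0)
                    return ((gunichar)(s[0] & 0x1F) << 6) | (s[1] & 0x3F);
            if ((s[0] & 0xF0) == 0xE0)
                    return ((gunichar)(s[0] & 0x0F) << 12) |
                           ((gunichar)(s[1] & 0x3F) << 6) | (s[2] & 0x3F);
            return ((gunichar)(s[0] & 0x07) << 18) |
                   ((gunichar)(s[1] & 0x3F) << 12) |
                   ((gunichar)(s[2] & 0x3F) << 6) | (s[3] & 0x3F);
    }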
|
|
* E.g. when typing with a Russian layout, CTRL+I will always insert ^I.
* Works with all of the start-state commands, the Ex, Fx and ^x commands, and
string building constructs.
This is exactly where process_edit_cmd_cb() case folds case-insensitive
characters.
The corresponding state therefore sets an is_case_insensitive flag now.
* Does not yet work with anything embedded into Q-Register specifications.
This could only be realized with a new state callback (is_case_insensitive()?)
that chains to the Q-Register and string building states recursively.
* Also it doesn't work with Ё on my Russian phonetic layout,
probably because the ANSI caret on that same key is considered dead
and not returned by gdk_keyval_to_unicode().
Perhaps we should directly check the keyval values?
* Whenever a non-ANSI key is pressed in an allowed state,
we try to check all other keyvals that could be produced by the same
hardware keycode, i.e. we check all groups (keyboard layouts).
|
|
or codepoints) (refs #5)
* This is trickier than it sounds because there isn't one single place to consult.
It depends on the context.
If the string argument relates to buffer contents - as in <I>, <S>, <FR> etc. -
the buffer's encoding is consulted.
If it goes into a register (EU), the register's encoding is consulted.
Everything else (O, EN, EC, ES...) expects only Unicode codepoints.
* This is communicated through a new field teco_machine_stringbuilding_t::codepage
which must be set in the states' initial callback.
* Seems overkill just for ^EUq, but it can be used for context-sensitive
processing of all the other string building constructs as well.
* ^V and ^W cannot be supported for Unicode characters for the time being without a Unicode-aware parser.
|
|
* gboolean cannot be used since it is a signed type
* bool is still more readable, even though we mostly use glib typedefs.
* AFAIK the glib types are deprecated, so sooner or later we will switch
to stdint/stdbool types anyway.
|
|
|
|
(-Wsingle-bit-bitfield-constant-conversion)
* gboolean is defined as gint which is a signed type.
A gboolean 1-bit-wide bitfield cannot have the values 0 and 1 but only 0 and -1.
* This wasn't practically a bug unless you would try to compare one of those bitfields
with TRUE.
* All of those bitfields are now guint, even though this is less self-documenting.
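* For illustration (a minimal standalone example, not SciTECO code):

    #include <glib.h>

    struct example_flags {
            gboolean signed_flag : 1;   /* gboolean is gint: holds 0 and -1 */
            guint    unsigned_flag : 1; /* holds 0 and 1 */
    };

    static gboolean
    example_flag_is_true(void)
    {
            struct example_flags f = { .signed_flag = TRUE };
            /* On typical ABIs this comparison is FALSE, which is exactly what
             * -Wsingle-bit-bitfield-constant-conversion warns about. */
            return f.signed_flag == TRUE;
    }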
|
|
|
|
|
|
* Previous Scintilla version was 3.6.4 and Scinterm was 1.7 (with lots of custom patches).
All of the patches are now either irrelevant or have been merged upstream.
* Since Scintilla 5 requires C++17, this increases the minimum GCC version at least
to 5.0. We may actually require even newer versions.
* I could not upgrade the scintilla-mirror (which was imported from Mercurial),
so the old sciteco-dev branch was renamed to sciteco-dev-pre-v2.0.0,
master was deleted and I reimported the entire Scintilla repo using
git-remote-hg.
This means that scintilla-mirror now contains two entirely separate trees.
But it is still possible to clone old SciTECO repos.
* The strategy/workflow of maintaining hotfix branches on scintilla-mirror has been changed.
Instead of having one sciteco-dev branch that is rebased onto new Scintilla upstream
releases and tagging SciTECO releases in scintilla-mirror (to keep the commits referenced),
we now create a branch for every Scintilla version we are based on (eg. sciteco-rel-5-1-3).
This branch is never rebased or deleted. Therefore, we are guaranteed to be able to
clone arbitrary SciTECO repo commits - not only releases.
Releases no longer have to be tagged in scintilla-mirror.
On the downside, fixup commits may accumulate in these new branches.
They can only be squashed once a new branch for a new Scintilla release is created
(e.g. by cherry-picking followed by rebase).
* Scinterm no longer has to reside in the Scintilla subdirectory,
so we added it as a regular submodule.
There are no more recursive submodules.
The Scinterm build system has not been improved at all, but we use
a trick based on VPATH to build Scinterm in scintilla/bin/.
* Scinterm is now in Git and we reference the upstream repo for the
time being.
We might mirror it and apply the same branching workflow as with Scintilla
if necessary.
The scinterm-mirror repository still exists but has not been touched.
We will also have to rewrite its master branch as it was a non-reproducible
Mercurial import.
* Scinterm now also comes with patches for Scintilla which we simply applied
on our sciteco-rel-5-1-3 branch.
* Scintilla 5 outsourced its lexers into the Lexilla project.
We added it as yet another submodule.
* All submodules have been moved into contrib/.
* The Scintilla API for setting lexers has consequently changed.
We now have to call SCI_SETILEXER(0, CreateLexer(name)).
As I did not want to introduce a separate command for setting lexers,
<ES> has been extended to allow setting lexers by name with the SCI_SETILEXER
message which effectively replaces SCI_SETLEXERLANGUAGE.
* The lexer macros (SCLEX_...) no longer serve any purpose - they weren't used
in the SciTECO standard library anyway - and have consequently been removed
from symbols-scilexer.c.
The style macros from SciLexer.h (SCE_...) are theoretically still useful - even
though they are not used by our current color schemes - and have therefore been
retained. They can be specified as wParam in <ES>.
* <ES> no longer allows symbolic constants for lParam.
This never made any sense since all supported symbols were always wParam.
* Scinterm supports new native cursor modes.
They are not used for the time being and the previous CARETSTYLE_BLOCK_AFTER
caret style is configured by default.
It makes no sense to enable native cursor modes now since the
command line should have a native cursor but is not yet a Scintilla view.
* Performance is much worse after the Scintilla upgrade,
so some optimizations will be necessary.
|
|
This is a total conversion of SciTECO to plain C (GNU C11).
The chance was taken to improve a lot of internal datastructures,
fix fundamental bugs and lay the foundations of future features.
The GTK user interface is now in a usable state!
All changes have been squashed together.
The language itself has almost not changed at all, except for:
* Detection of string terminators (usually Escape) now takes
the string building characters into account.
A string is only terminated outside of string building characters.
In other words, you can now for instance write
I^EQ[Hello$world]$
This removes one of the last bits of shellisms which is out of
place in SciTECO where no tokenization/lexing is performed.
Consequently, the current termination character can also be
escaped using ^Q/^R.
This is used by auto completions to make sure that strings
are inserted verbatim and without unwanted side effects.
* All strings can now safely contain null-characters
(see also: 8-bit cleanliness).
The null-character itself (^@) is not (yet) a valid SciTECO
command, though.
An incomplete list of changes:
* We got rid of the BSD headers for RB trees and lists/queues.
The problem with them was that they used a form of metaprogramming
only to gain a bit of type safety. It also resulted in less
readable code. This was a C++ disease.
The new code avoids metaprogramming whose only purpose is type safety.
The BSD tree.h has been replaced by rb3ptr by Jens Stimpfle
(https://github.com/jstimpfle/rb3ptr).
This implementation is also more memory efficient than BSD's.
The BSD list.h and queue.h have been replaced with a custom
src/list.h.
* Fixed crashes, performance issues and compatibility issues with
the Gtk 3 User Interface.
It is now more or less ready for general use.
The GDK lock is no longer used to avoid using deprecated functions.
On the downside, the new implementation (driving the Gtk event loop
stepwise) is even slower than the old one.
A few glitches remain (see TODO), but it is hoped that they will
be resolved by the Scintilla update which will be performed soon.
* A lot of program units have been split up, so they are shorter
and easier to maintain: core-commands.c, qreg-commands.c,
goto-commands.c, file-utils.h.
* Parser states are simply structs of callbacks now.
They still achieve a kind of polymorphism via a preprocessor trick
(see the sketch after this list).
TECO_DEFINE_STATE() takes an initializer list that will be
merged with the default list of field initializers.
To "subclass" states, you can simply define new macros that add
initializers to existing macros.
* Parsers no longer have a "transitions" table but the input_cb()
may use switch-case statements.
There is also teco_machine_main_transition_t now, which can
be used to implement simple transitions. Additionally, you
can specify functions to execute during transitions.
This largely avoids long switch-case-statements.
* Parsers are embeddable/reusable now, at least in parse-only mode.
This does not currently bring any advantages but may later
be used to write a Scintilla lexer for TECO syntax highlighting.
Once parsers are fully embeddable, it will also be possible
to run TECO macros in a kind of coroutine which would allow
them to process string arguments in real time.
* undo.[ch] still uses metaprogramming extensively, but via
the C preprocessor of course. On the downside, most undo
token generators must be instantiated explicitly (theoretically
we could have used nested functions / trampolines to
instantiate them automatically, but this has turned out to be
dangerous).
There is a TECO_DEFINE_UNDO_CALL() to generate closures for
arbitrary functions now (i.e. to call an arbitrary function
at undo-time). This simplified a lot of code and is much
shorter than manually pushing undo tokens in many cases.
* Instead of the ridiculous C++ Curiously Recurring Template
Pattern to achieve static polymorphism for user interface
implementations, we now simply declare all functions to
implement in interface.h and link in the implementations.
This is possible since we no longer have to define
interface subclasses (all state is static variables in
the interface's *.c files).
* Headers are now significantly shorter than in C++ since
we can often hide more of our "class" implementations.
* Memory counting is based on dlmalloc for most platforms now.
Unfortunately, there is no malloc implementation that
provides an efficient constant-time memory counter that
is guaranteed to decrease when freeing memory.
But since we use a defined malloc implementation now,
malloc_usable_size() can be used safely for tracking memory use.
malloc() replacement is very tricky on Windows, so we
use a poll thread there instead. This can also be enabled
on other supported platforms using --disable-malloc-replacement.
All in all, I'm still not pleased with the state of memory
limiting. It is a mess.
* Error handling uses GError now. This has the advantage that
the GError codes can be reused once we support error catching
in the SciTECO language.
* Added a few more test suite cases.
* Haiku is no longer supported as builds are unstable and
I did not manage to debug them - quite possibly Haiku bugs
were responsible.
* Glib v2.44 or later is now required.
The GTK UI requires Gtk+ v3.12 or later now.
The GtkFlowBox fallback and sciteco-wrapper workaround are
no longer required.
* We now extensively use the GCC/Clang-specific g_auto
feature (automatic deallocations when leaving the current
code block).
* Updated copyright to 2021.
SciTECO has been in continuous development, even though there
have been no commits since 2018.
* Since these changes are so significant, the target release has
been set to v2.0.
It is planned that beginning with v3.0, the language will be
kept stable.
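The preprocessor trick behind the state definitions, roughly (a standalone
sketch with hypothetical names, not the actual TECO_DEFINE_STATE()):

    typedef struct {
            const char *name;
            int (*input_cb)(int chr);
            int style;
    } example_state_t;

    static int example_default_input(int chr) { return chr; }

    /*
     * Defaults come first; the per-state initializer list passed via
     * __VA_ARGS__ comes last and overrides them (later designated
     * initializers win in C).
     */
    #define EXAMPLE_DEFINE_STATE(NAME, ...) \
            static const example_state_t NAME = { \
                    .name = #NAME, \
                    .input_cb = example_default_input, \
                    .style = 0, \
                    ##__VA_ARGS__ \
            }

    /* A "subclass" only lists the fields it wants to customize: */
    EXAMPLE_DEFINE_STATE(example_state_start, .style = 1);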
|
|
detect EMCurses
* Emscripten can be used (theoretically) to build a host-only platform-independent version
of SciTECO (running under node.js instead of the browser).
* I ported netbsd-curses with Emscripten for that purpose. Therefore, adaptations for running
in the browser are restricted to EMCurses now.
|
|
* when enabled, it will automatically upper-case all
one- or two-letter commands (which are case-insensitive).
* also affects the up-caret control commands, so that when inserted
they look more like real control commands.
* specifically does not affect case-insensitive Q-Register specifications
* the result is command lines that are more readable and conform
to the coding style used in SciTECO's standard library.
This eases reusing command lines as well.
* Consequently, string-building and pattern match characters should
be case-folded as well, but they aren't currently since
State::process_edit_cmd() does not have sufficient insight
into the MicroStateMachines. Also, it could not be delegated
to the MicroStateMachines.
Perhaps they should be abandoned in favour of embeddable
regular state machines; or regular state machines with a stack
of return states?
|
|
editing key
* StateEscape should return the same fnmacro mask as StateStart
* When rubbing out a command, we should stop at StateEscape as well.
Therefore we reintroduced States::is_start().
RTTI is still not used.
|
|
as State::process_edit_cmd() virtual methods
* Cmdline::process_edit_cmd() was much too long and deeply nested.
It used RTTI excessively to implement the state-specific behaviour.
It became apparent that the behaviour is largely state-specific and could be
modelled much more elegantly as virtual methods of State.
* Basically, a state can now implement a method to customize its
commandline behaviour.
In the case that the state does not define custom behaviour for
the key pressed, it can "chain" to the parent class' process_edit_cmd().
This can be optimized to tail calls by the compiler.
* The State::process_edit_cmd() implementations are still isolated in
cmdline.cpp. This is not strictly necessary but allows us to keep the
already large compilation units like parser.cpp from growing even larger.
Also, the edit command processing has little to do with the rest of
a state's functionality and is only used in interactive mode.
* As a result, we have many small functions now which are much easier to
maintain.
This makes adding new and more complex context-sensitive editing behaviour
easier.
* State-specific function key masking has been refactored by introducing
State::get_fnmacro_mask().
* This allowed us to remove the States::is_*() functions which have
always been a crutch to support context-sensitive key handling.
* RTTI is almost completely eradicated, except for exception handling
and StdError(). Both remaining cases can probably be avoided in the
future, allowing us to compile smaller binaries.
|
|
|
|
* some classic TECOs have this
* just like ^[, dollar works as a command only, not as a string terminator
* it improves the readability of macros using printable characters only
* it closes a gap in the language by allowing $$ (double-dollar) and
^[$ as printable ways to write the return-from-macro command.
^[^[ was not and is not possible.
* since command line termination is a regular interactive return-command
in SciTECO, double-dollar will also terminate the command line now.
This will be allowed unless it turns out to be a cause of trouble.
* The handling of unterminated commands has been cleaned up by
introducing State::end_of_macro().
Most commands (and thus states) except the start state cannot be
valid at the end of a macro since this indicates an unterminated/incomplete
command.
All lookahead-commands (currently only ^[) will end implicitly
at the end of a macro and so will need a way to perform their action.
The virtual method allows these actions to be defined with the rest
of the state's implementation.
|
|
* we were basing the glib allocators on throwing std::bad_alloc just like
the C++ operators. However, this was always unsafe since we were throwing
exceptions across plain-C frames (Glib).
Also, the memory vtable has been deprecated in Glib, resulting in
ugly warnings.
* Instead, we now let the C++ new/delete operators work like Glib
by basing them on g_malloc/g_slice.
This means they will assert and the application will terminate
abnormally in case of OOM. OOMs cannot be handled properly anyway, so it is
more important to have a good memory limiting mechanism.
* Memory limiting has been completely revised.
Instead of approximating undo stack sizes using virtual methods
(which is imprecise and comes with a performance penalty),
we now use a common base class SciTECO::Object to count the memory
required by all objects allocated within SciTECO.
This is less precise than using global replacement new/deletes
which would allow us to control allocations in all C++ code including
Scintilla, but they are only supported as of C++14 (GCC 5) and adding compile-time
checks would be cumbersome.
In any case, we're missing Glib allocations (esp. strings).
* As a platform-specific extension, on Linux/glibc we use mallinfo()
to count the exact memory usage of the process.
On Windows, we use GetProcessMemoryInfo() -- the latter implementation
is currently UNTESTED.
* We use g_malloc() for new/delete operators when there is
malloc_trim() since g_slice does not free heap chunks properly
(probably does its own mmap()ing), rendering malloc_trim() ineffective.
We've also benchmarked g_slice on Linux/glibc (malloc_trim() shouldn't
be available elsewhere) and found that it brings no significant
performance benefit.
On all other platforms, we use g_slice since it is assumed
that it at least does not hurt.
The new g_slice based allocators should be tested on MSVCRT
since I assume that they bring a significant performance benefit
on Windows.
* Memory limiting does now work in batch mode as well and is still
enabled by default.
* The old UndoTokenWithSize CRTP hack could be removed.
UndoStack operations should be a bit faster now.
But on the other hand, there will be an overhead due to repeated
memory limit checking on every processed character.
|
|
* Using a common implementation in RBTreeString::auto_complete().
This is very efficient even for very huge tables since only
an O(log(n)) lookup is required and then all entries with a matching
prefix are iterated. Worst-case complexity is still O(n), since all
entries may be legitimate completions.
If necessary, the number of matching entries could be restricted, though.
* Auto completes short and long Q-Reg names.
Short names are "case-insensitive" (since they are upper-cased).
Long specs are terminated with a closing bracket.
* Long spec completions may have problems with names containing
funny characters since they may be misinterpreted as string building
characters or contain braces. All the auto-completions suffered from
this problem already (see TODO).
* This greatly simplifies investigating the Q-Register name spaces
interactively and e.g. calling macros with long names, inserting
environment registers etc.
* Goto labels are terminated with commas since they may be part
of a computed goto.
* Help topics are matched case-insensitively (just like the topic
lookup itself) and are terminated with the escape character.
This greatly simplifies navigating womanpages and looking up
topics with long names.
|
|
* one would expect function key macros masked for the start
state to work after ^[ ($), but since it has its own
state now, this has been broken since f08187e454f56954b41d95615ca2e370ba19667e.
* Similarly, command reinsertion would reinsert too much
after $, since the parser wouldn't be in the "real" start state.
* The "escape" state should be handled like the start state
(where new commands can begin)
from the perspective of the user -- the difference is not
even documented, it's an implementation detail.
|
|
optimizations and additional checks
* undo tokens emitted by the expression stack no longer waste
memory by pointing to the stack implementation.
This uses some ugly C++ constant template arguments but
saves 4 or 8 bytes per undo token depending on the architecture.
* Round braces are counted now and the return command $$ will
use this information to discard all non-relevant brace levels.
* It is an error to close a brace when none have been opened.
* The bracing rules are still very liberal, allowing you to
close braces in macros belonging to a higher call frame
or leave them open at the end of a macro.
While this is technically possible, it is perhaps a good
idea to tighten these rules in some future release.
* Loops no longer (ab)use the expression stack to store
program counters and loop counters.
This removes flow control from the responsibility of the
expression stack which is much safer now since we can control
where we jump to.
This also eased implementing proper semantics for $$.
* It is an error to leave loops open at the end of a macro
or to try to close a loop opened in the caller of the macro.
Similarly, it is only possible to close a loop from the
current invocation frame.
This means it is now impossible to accidentally jump to invalid
PCs.
* Even though loop context stacks could be attached directly to
the macro invocation frame, this would be inefficient.
Instead there's a loop frame pointer now that is part of the
invocation frame. All frames will reuse the same stack structure.
* Loops are automatically discarded when returning using $$.
* Special aggregating forms of the loop start (":<") and loop
end (":>") commands are possible now and have been implemented.
This improves SciTECO's capability as a stack-oriented language.
It is no longer necessary to write recursive macros to generate
stack values of arbitrary length dynamically or to process them.
* All expression and loop stacks are still fixed-size.
It may be a good idea to implement dynamic resizing (TODO).
* Added some G_UNLIKELYs to Execute::macro(). Should improve
the branch prediction of modern CPUs.
* Local Q-Register tables are allocated on the stack now instead
of on the heap (the bulk of a table is stored on the heap anyway).
Should improve performance of macro invocations.
* Document that "F<" will jump to the beginning of the macro
if there is no loop.
This is not in standard TECO, but I consider it a useful feature.
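* The loop frame pointer idea, roughly (a sketch in C with hypothetical
types; the actual implementation differs):

    #include <glib.h>

    typedef struct {
            gsize pc_start;     /* program counter of the loop start */
            gint counter;       /* remaining iterations */
    } example_loop_ctx_t;

    typedef struct {
            GArray *loop_stack; /* shared by all invocation frames */
            guint loop_fp;      /* index of this frame's first loop context */
    } example_frame_t;

    /* Returning from a frame ($$ or end of macro) discards exactly the
     * loop contexts opened within that frame. */
    static void
    example_discard_frame_loops(example_frame_t *frame)
    {
            g_array_set_size(frame->loop_stack, frame->loop_fp);
    }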
|
|
* <$$> is faster than jumping to the end of the macro
and enables shorter code for returning values from macros.
* this also replaces $$ as an immediate editing command.
In other words, command line termination is an ordinary command
now. The old behaviour was similar to what classic TECO did.
Classic TECO however had no choice but to track key presses
directly for command line termination as it did not keep track
of the parser state as input was typed.
This led to some glitches in the language. For instance
"FS$$" would terminate the command line, unless the second escape
was typed after backspace, etc. This behaviour is not worth copying
and SciTECO did a better job than that by making sure that at least the
second escape is only effective if it is not part of language syntax.
This still led to some undesirable cases like "ES...$$$" that would
terminate the command line unexpectedly.
To terminate the command line after something like "FS$$", you will
now have to type "FS$$$$".
* As it is a regular command now - just executed immediately - and
its properties stay close to the macro return behaviour, command line
termination may now not always be performed when $$ is typed even
as a standalone command. E.g. "Ofoo$ !bar!$$ !foo!Obar$" will
curiously terminate the command line now.
* This also means that macros can finally terminate command lines
by using the command line editing commands ({ and }) to insert
$$ into the command line macro.
This is also of interest for function key macros.
* This implementation showed some serious shortcomings in SciTECO's
current parser that have yet to be fixed.
E.g. the macro "@^Ua{<$$>}" is currently unsafe since
loops abuse the expression stack for storing their state and $$
does not touch the expression stack. Calling "Ma>" would actually
continue the loop jumping to the beginning of the command line
since program counters referring to the macro A will be reused!
This cannot be easily solved by checking for loop termination
since being able to return that way from loops is a useful
feature. This is a problem even without loops and $$, e.g. as
in "@^Ua{1,2,3(4,5} Ma)".
Instead, a kind of expression stack frame pointer must be
added to macro invocation stack frames, pointing to the beginning
of the expression stack for the current frame.
At the end of macros or on return, the stack contents
corresponding to the frame can be discarded while preserving the
immediate arguments at the time of the return or end-of-macro.
This would stabilize SciTECO's macro semantics.
* When a top-level macro returns in batch mode, it would
be a good idea to use the last argument to calculate the
process return code, so it can be set by SciTECO scripts (TODO).
|
|
* SciTECO commands are implemented with immediate execution in mind.
Those commands that do need to execute immediately while a string
command is entered, can do so using StateExpectString::process().
For simplicity, the parser just assumed that every input character
should result in immediate execution (if the command supports it
of course).
* This led to unnecessarily slow execution of commands like <I> or
<S> in batch mode. E.g. a search was always repeated for every
character of the pattern - an N-character pattern could result in
N searches instead of one.
Also in interactive mode when executing a macro
or repeating commands in a loop, immediate processing of string arguments
is unnecessary and results in superfluous undo tokens.
* These cases are all optimized now by being informed about
the necessity of providing immediate feedback via State::refresh().
This is used by StateExpectString to defer calling process() as
long as possible.
* For states extending StateExpectString, there is no change since
they can already process arbitrarily long strings.
The optimization is hidden in StateExpectString.
* some allocations are now also avoided in StateExpectString::custom().
|
|
|
|
* this means that QRegSpecMachine::input() no longer has to return
a dummy QRegister in parse-only mode.
This saves an unnecessary QRegister table lookup and speeds up
parsing.
* QRegSpecMachine can now be easily extended to behave differently
when returning a Q-Register, e.g. simply returning NULL if a register
does not exist, or returning a register by prefix.
This is important for some planned commands.
* StateExpectQReg::got_register() now gets a QRegister *.
It can theoretically be NULL - still we don't have to check
for NULL in most cases since NULL is only passed in parse-only mode.
|
|
* expands to the value of $HOME (the env variable instead of
the register which currently makes a slight difference).
* supported for tab-completions
* supported for all file-name accepting commands.
The expansion is done centrally in StateExpectFile::done().
A new virtual method StateExpectFile::got_file() has been
introduced to pass the expanded/processed file name to
command implementations.
* sciteco(7) has been updated: There is now a separate section
on file name arguments and file name handling in SciTECO.
This information is important but has been scattered across
the document previously.
* optimized is_glob_pattern() in glob.h
|
|
working directory
* FG stands for "Folder Go"
* FG behaves similarly to a Unix shell `cd`.
Without arguments, it changes to the $HOME directory.
* The $HOME directory was previously only used by $SCITECOCONFIG on Unix.
Now it is documented on its own, since the HOME directory should also
be configurable on Windows - e.g. to adapt SciTECO to a MinGW or Cygwin
installation.
HOME is initialized just like the other environment variables.
This also means that now, the $HOME Q-Register is always defined
and can be used by platform-agnostic macros.
* FG uses a new kind of tab-completion: for directories only.
It would be annoying to complete the FG command after every
directory, so this tab-completion does not close the command
automatically. Theoretically, it would be possible to close
the command after completing a directory with no subdirectories,
but this is not supported currently.
* Filename arguments are no longer completed with " " if {} escaping
is in place as this brings no benefit. Instead no completion character
is inserted for this escape mode.
* "$" was mapped to the current directory to support an elegant way to
insert/get the current directory.
Also this allows the idiom "[$ FG...new_dir...$ ]$" for changing
the current directory temporarily.
* The Q-Register stack was extended to support restoring the string
part of special Q-Registers (that overwrite the default functionality)
when using the "[$" and "]$" commands.
* fixed minor typos (American spelling)
|
|
cleanup/refactoring
* characters rubbed out are not totally removed from the command line,
but only from the *effective* command line.
* The rubbed out command line is displayed after the command line cursor.
On Curses it is grey and underlined.
* When characters are inserted that are on the rubbed out part of the command line,
the cursor simply moves forward.
NOTE: There's currently no immediate editing command for reinserting the
next character/word from the rubbed out command line.
* Characters resulting in errors are no longer simply discarded but rubbed out,
so they will stay in the rubbed out part of the command line, reminding you
which character caused the error.
* Improved Cmdline formatting on Curses UI:
* Asterisk is printed bold
* Control characters are printed in REVERSE style, similar to what
Scinterm does. The control character formatting has thus been moved
from macro_echo() in cmdline.cpp to the UI implementations.
* Updated the GTK+ UI (UNTESTED): I did only the most important API
adaptations. The command line still does not use any colors.
* Refactored entire command line handling:
* The command line is now a class (Cmdline), and most functions
in cmdline.cpp have been converted to methods.
* Esp. process_edit_cmd() (now Cmdline::process_edit_cmd()) has been
simplified. There is no longer the possibility of a buffer overflow
because of static insertion buffer sizes.
* Cleaned up usage of the cmdline_pos variable (now Cmdline::pc), which
is really a program counter that used a different origin than macro_pc,
which was really confusing.
* The new Cmdline class is theoretically 8-bit clean. However all of this
will change again when we introduce Scintilla views for the command line.
* Added 8-bit clean (null-byte aware) versions of QRegisterData::set_string()
and QRegisterData::append_string()
|
|
* also did some whitespace cleanup in SciTECO now that tabs are
displayed properly
|
|
|
|
* the ^I command was altered to insert indentation characters
rather than always plain tabs.
* The <TAB> immediate editing command was added for all
insertion arguments (I, ^I but also FR and FS)
* documentation was extended with a discussion of indentation
|
|
this is analogous to EU, as the string-building equivalent
of ^U.
|
|
* the GError exception has been renamed to GlibError, to avoid
name clashes when working from the SciTECO namespace
|
|
normally, since SciTECO is not a library, this is not strictly
necessary since every library should use proper name prefixes
or namespaces for all global declarations to avoid name clashes.
However:
* you cannot always rely on that
* Scintilla does violate the practice of using prefixes or namespaces.
The public APIs are OK, but it does define global functions/methods,
e.g. for "Document" that clashed with SciTECO's "TECODocument" class at
link-time.
Scintilla can put its definitions in a namespace, but this feature
cannot be easily enabled without patching Scintilla.
* a "SciTECO" namespace will be necessary if "SciTECO" is ever to be
turned into a library. Even if this library will have only a C-linkage
API, it must ensure it doesn't clutter the global namespace.
So the old "TECODocument" class was renamed back to "Document"
(SciTECO::Document).
|