Pt 2.1 - Creating your own Chaos System
Getting started
Welcome back! If you are reading this, it is because you enjoyed the post Chaos Based Encryption applied to stegoanalysis (although perhaps a better name would have been CBE Fundamentals, haha). In any case, before starting this post I recommend taking a look at the previous one to learn more about the concepts discussed.
In this little work you will learn how to create your own CBE system by applying some formula checks – like the Lyapunov exponent discussed in the first part – with a total of 3 checks. We will also name some other proofs, but they will not be covered in depth in this post.
Aspects to consider
For a chaotic system to be considered secure, it has to pass certain tests and requirements. First of all, here are the 3 tests we will dig into:
- Lyapunov Exponent
- Binary Tests (1-0 Test)
- Kolmogorov-Sinai
Then we will talk about which mathematical functions are recommendable to use, along with real examples of CBEs. Finally, we will compare system dimensions (the number of unknowns), each option having its pros and cons.
Verifying the Chaotic System
Lyapunov Exponent
Definition
From ScienceDirect:
A Lyapunov exponent is a measure of the chaotic nature of a system’s dynamics, indicating the divergence of nearby trajectories. It is computed using a practical formula that considers the time step and the distance between trajectories at different times.
From California Institute of Technology:
Lyapunov exponents tell us the rate of divergence of nearby trajectories—a key component of chaotic dynamics.
From Harvard Mathematics Department:
The Lyapunov exponent is a quantitative number which indicates the sensitive dependence on initial conditions. It measures the exponential rate at which errors grow. If the Lyapunov exponent is $\log |c|$, then you can expect an error $\epsilon |c|^n$ after $n$ iterations, if $\epsilon$ was the initial error.
How it works
Mathematically, the Lyapunov exponent is defined as
$$ \lambda = \lim_{ t \to \infty } \frac{1}{t} \ln\left( \frac{\delta x(t)}{\delta x(0)} \right) $$
mainly for 1-dimensional (1D onwards) systems, where $\delta x(t) = |x'(t) - x(t)|$.
The exponent measures the rate of divergence of trajectories on an attractor. Consider a flow $\vec{\phi}(t)$ in phase space, given by $$ \frac{d\vec{\phi}}{dt} = \vec{F}(\vec{\phi}) $$
If instead of initiating the flow at $\vec{\phi}(0)$, it is initiated at $\vec{\phi}(0)+\epsilon(0)$, where $\epsilon$ is a small perturbation (on the order of $10^{-6}$ or smaller), sensitivity to initial conditions would produce a divergent trajectory, as mentioned in Chaos Based Encryption applied to stegoanalysis. $_{https://ocw.mit.edu/courses/12-006j-nonlinear-dynamics-chaos-fall-2022/mit12_006jf22_lec24.pdf}$
The Lyapunov exponent can be computed from a time-independent Jacobian or from time-dependent eigenvalues, but here we are talking about how to create a chaotic system, not about the Lyapunov variants.
If you are reading this in the future, here you have a more detailed paper about the Lyapunov exponent.
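As a quick sanity check for any numerical estimator: the fully chaotic logistic map has a known closed-form exponent. Since $x_{n+1} = 4x_n(1-x_n)$ is conjugate to the binary shift map, its Lyapunov exponent is

$$ \lambda = \ln 2 \approx 0.693 $$

so a correct 1D calculator fed this map should converge to roughly that value.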
Lyapunov calculator (1D)
With the numpy Python module it is very easy to calculate this exponent by computing the average logarithmic rate of divergence between two trajectories of the system as a summation. For each iteration, the separation is rescaled to remain close to $\epsilon$, preserving numerical stability.
```python
import numpy as np

def lyapunov(mapFunction, x0, eps, idx, discard=100):
    x = x0
    xPert = x0 + eps
    lyap = 0.0
    for i in range(idx):
        x = mapFunction(x)
        xPert = mapFunction(xPert)
        delta = abs(xPert - x)
        if delta == 0:  # trajectories collapsed; re-seed the perturbation
            xPert = x + eps
            delta = eps
        else:
            xPert = x + eps * (xPert - x) / delta
        if i >= discard:
            lyap += np.log(delta / eps)
    return lyap / (idx - discard)
```
Later, when we finish explaining all the tests and you create your system, I will teach you how to use this code. Anyway, here is a brief example:
```python
def system(k):
    return lambda x: k * x * (1 - x)

logistic_map = system(4.0)
lyap1 = lyapunov(logistic_map, x0=0.2, eps=1e-8, idx=10000)
```
Lyapunov calculator (3D)
Don’t worry about this, it is only to set up the code; later on we will discuss differences between dimensions in a chaotic system. In my case I work in 1D or 3D; you can adapt this code to another number of variables.
```python
from typing import Callable, Sequence, Union
import numpy as np

# Typing
ArrayLike3 = Sequence[Union[int, float]]
MapFunc = Callable[[ArrayLike3], Sequence[Union[int, float]]]
NormalizeFunc = Callable[[np.ndarray], np.ndarray]

def lyapunov_3d(
    map_function: MapFunc,
    x0: ArrayLike3,
    eps: float = 1e-8,
    idx: int = 10000,
    discard: int = 100,
    normalize: NormalizeFunc = lambda v: v
) -> float:
    x = np.array(x0, dtype=np.float64)
    rng = np.random.default_rng()
    delta = rng.normal(size=3)
    delta *= eps / np.linalg.norm(delta)
    x_pert = x + delta
    lyap_sum = 0.0
    for i in range(idx):
        x_next = normalize(np.array(map_function(x), dtype=np.float64))
        x_pert_next = normalize(np.array(map_function(x_pert), dtype=np.float64))
        diff = x_pert_next - x_next
        dist = np.linalg.norm(diff)
        if dist == 0:
            diff = rng.uniform(-eps, eps, size=3)
            dist = np.linalg.norm(diff)
        diff *= (eps / dist)
        x_pert = x_next + diff
        x = x_next
        if i >= discard:
            lyap_sum += np.log(dist / eps)
    return lyap_sum / max(1, (idx - discard))
```
Brief example of how to use it:
```python
def clamp01(v: np.ndarray) -> np.ndarray:
    return np.clip(v, 0.0, 1.0)

def chaoticMap(xyz, a, b, c):
    x, y, z = xyz
    # replace this placeholder with your own system's equations
    dx, dy, dz = x, y, z
    return [dx, dy, dz]

lyap3 = lyapunov_3d(lambda v: chaoticMap(v, 0.1, 0.2, 0.3),
                    x0=[0.0, 1.0, 0.0], normalize=clamp01)
```
Binary Test (1-0 Test)
Definition
From arxiv.org:
The test distinguishes between regular and chaotic dynamics for a deterministic system. The nature of the dynamical system is irrelevant for the implementation of the test; it is applicable to data generated from maps, ordinary differential equations and partial differential equations.
From Harvard; although the source deals with a magnetized spacetime, we only want the definition:
The 0–1 binary test correlation method distinguishes between regular and chaotic dynamics of electrically neutral or charged particles. The correlation method is almost the same as the techniques of the Poincaré map and fast Lyapunov indicators in identifying the regular and chaotic cases.
From School of Mathematics & Statistics of Sydney:
The test is designed to distinguish between regular, i.e. periodic or quasi-periodic, dynamics and chaotic dynamics. It works directly with the time series and does not require any phase space reconstruction
How it works
The 0-1 Test for Chaos is designed to distinguish between chaotic and periodic systems directly from a time series, unlike the Lyapunov exponent, which is usually computed from the system's equations. It works without any knowledge of those equations.
In the regular case the trajectories of the system are typically bounded, whereas in the chaotic case the trajectories typically behave approximately like a two-dimensional Brownian motion with zero drift and hence evolve diffusively.
So, given a scalar observable $\phi$ (here $\phi_i = x_i$), define $\Theta$:
$$ \Theta_i = ct_i + \sum_{k=0}^{i} \phi_k \Delta t_k $$
Translating the variables is the main transformation: the $\phi$ signal is modulated onto a rotating phase, producing a projected 2D trajectory in the plane
$$ p_i = \sum_{k=0}^{i} \phi_k \cos(\Theta_k) \Delta t_k $$
Then the mean-square displacement (MSD onwards) is used: in a Brownian random walk the MSD grows linearly with $j$, whereas in a bounded (regular) system the MSD saturates at a bounded value.
$$ M(j) =\frac{1}{N-j}\sum_{n=0}^{N-j-1}\Bigl(p_{n+j}-p_n\Bigr)^2 $$
Finally, take the slope of the line that fits $\ln M(j)$ vs. $\ln t_j$. If the system is chaotic, $M(j)$ grows linearly with $j$ and the slope approaches 1; otherwise the MSD saturates and the slope tends to 0, indicating a periodic, non-chaotic system.
Thus $K$ is the correlation slope of a simple linear regression:
$$ K = \frac{\sum_j (x_j - \bar{x})(y_j - \bar{y})}{\sum_j (x_j - \bar{x})^2}, \quad x_j = \ln t_j, \quad y_j = \ln M(j) $$
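As a minimal, self-contained sketch (the data here is synthetic, purely for illustration), the slope $K$ can be computed term by term exactly as in the formula above, and it agrees with the `np.polyfit` call used in the calculators of this post:

```python
import numpy as np

# Synthetic log-log data with slope ~1 (diffusive, chaos-like growth)
rng = np.random.default_rng(0)
log_t = np.log(np.arange(1, 200, dtype=float))           # ln t_j
log_M = log_t + rng.normal(scale=0.05, size=log_t.size)  # ln M(j)

# Explicit least-squares slope, exactly as in the regression formula
xc = log_t - log_t.mean()
yc = log_M - log_M.mean()
K = np.sum(xc * yc) / np.sum(xc ** 2)
print(K)  # close to 1 => diffusive, chaos-like behavior
```

For a degree-1 fit, `np.polyfit(log_t, log_M, 1)[0]` returns exactly this least-squares slope.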
1-0 Test Calculator (3D)
All the maths explained before is reflected in the following Python code:
```python
import numpy as np
from scipy.integrate import cumulative_trapezoid as cumtrapz

def chaos_test(time, x, c=np.pi, plot=False):
    # Phi = observable = x(t)
    phi = x
    # θ(t) = c·t + ∫ φ dt
    integral_phi = cumtrapz(phi, time, initial=0)
    theta = c * time + integral_phi
    # P(t) = ∫ φ(t)·cos(θ(t)) dt
    integrand = phi * np.cos(theta)
    p = cumtrapz(integrand, time, initial=0)
    # M(j) = ⟨ [p(n+j) – p(n)]² ⟩_n
    n = len(p)
    max_lag = n // 10
    M = np.array([np.mean((p[j:] - p[:-j])**2) for j in range(1, max_lag)])
    t_vals = time[1:max_lag]
    log_M = np.log(M + 1e-16)
    log_t = np.log(t_vals + 1e-16)
    K, _ = np.polyfit(log_t, log_M, 1)
    return K
```
By adding the following plot code right after the computation of `K, _` (and before the `return`), you will be able to see the graph, making it more user-friendly:
```python
import matplotlib.pyplot as plt

if plot:
    plt.figure(figsize=(6,4))
    plt.plot(log_t, log_M, '.', markersize=2)
    plt.plot(log_t, K*log_t + np.polyfit(log_t, log_M, 1)[1],
             label=f"K ≈ {K:.2f}")
    plt.xlabel("log(t)")
    plt.ylabel("log(M(t))")
    plt.legend()
    plt.title("0–1 Chaos Test")
    plt.grid(True)
    plt.tight_layout()
    plt.show()
```
We will use the Lorenz System for a little demonstration of the code:
```python
from scipy.integrate import solve_ivp

def lorenz(t, state, sigma=10, rho=28, beta=8/3):
    x, y, z = state
    dxdt = sigma * (y - x)
    dydt = x * (rho - z) - y
    dzdt = x * y - beta * z
    return [dxdt, dydt, dzdt]

if __name__ == "__main__":
    N = 10000
    t_span = (0, 50)
    t_eval = np.linspace(*t_span, N)
    init_state = [1.0, 1.0, 1.0]
    sol = solve_ivp(lorenz, t_span, init_state, t_eval=t_eval)
    time = sol.t
    x_series = sol.y[0]
    K = chaos_test(time, x_series, c=np.pi, plot=True)
    print(f"[Lorenz System] Chaos indicator K = {K:.3f} " +
          ("=> Chaotic" if K > 0.7 else "=> Non-Chaotic (or weak)"))
```
In this case $K$ tends to 1, so it is a chaotic system:

1-0 Test Calculator (1D)
Less common, but here is the same code for chaotic systems with one variable:
```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import cumulative_trapezoid as cumtrapz

def chaos_test(time, x, c=np.pi, plot=False):
    phi = x
    integral_phi = cumtrapz(phi, time, initial=0)
    theta = c * time + integral_phi
    integrand = phi * np.cos(theta)
    p = cumtrapz(integrand, time, initial=0)
    n = len(p)
    max_lag = n // 10
    M = np.array([np.mean((p[j:] - p[:-j])**2) for j in range(1, max_lag)])
    t_vals = time[1:max_lag]
    log_M = np.log(M + 1e-16)
    log_t = np.log(t_vals + 1e-16)
    K, _ = np.polyfit(log_t, log_M, 1)
    if plot:
        plt.figure(figsize=(6,4))
        plt.plot(log_t, log_M, '.', markersize=2)
        plt.plot(log_t, K*log_t + np.polyfit(log_t, log_M, 1)[1],
                 label=f"K ≈ {K:.2f}")
        plt.xlabel("log(t)")
        plt.ylabel("log(M(t))")
        plt.legend()
        plt.title("0–1 Chaos Test")
        plt.grid(True)
        plt.tight_layout()
        plt.show()
    return K
```
A quick example for execution:
```python
def logistic_map(r, x0, n_iter):
    x = np.empty(n_iter)
    x[0] = x0
    for i in range(1, n_iter):
        x[i] = r * x[i-1] * (1 - x[i-1])
    return x

if __name__ == "__main__":
    N = 10000
    r = 4.0
    x0 = 0.5
    x_series = logistic_map(r, x0, N)
    time = np.arange(N)
    K = chaos_test(time, x_series, c=np.pi, plot=True)
    print(f"[Logistic Map] Chaos indicator K = {K:.3f} " +
          ("=> Chaotic" if K > 0.7 else "=> Not chaotic (or weak)"))
```
Kolmogorov-Sinai
Definition
From Wolfram:
Kolmogorov entropy, also known as metric entropy, Kolmogorov-Sinai entropy, or KS entropy, is defined as follows. Divide phase space into D-dimensional hypercubes of content $\epsilon^D.$ Let $P_{i_0,…,i_n}$ be the probability that a trajectory is in hypercube $i_0$ at $t=0$, $i_1$ at $t=T$, $i_2$ at $t=2T$, etc.
From Harvard Theoretical Physics Department:
The Kolmogorov-Sinai entropy of the Ising model is calculated with a coupled map lattice model. The KS entropy indicates a mixing rate in the equilibrium state. The KS entropy exhibits a similar type of singularity to the Boltzmann entropy at the critical point.
From ProofWiki:
Let $(X, \beta, \mu)$ be a probability space
Let $T:X \to X$ be a $\mu$-preserving transformation
Then the Kolmogorov-Sinai entropy of $T$ is defined as: $$ h(T) := \sup \{ h(T, A) : A \text{ is a finite sub-}\sigma\text{-algebra of } \beta \} $$
where:
$h(T,A)$ denotes the entropy of $T$ with respect to $A$
How it works
Following Scholarpedia, consider a dynamical system with discrete time to define the entropy of dynamical systems. The phase space of the dynamical system is denoted by $M$. It is equipped with a $\sigma$-algebra $\mathcal{M}$ and a probability measure $\mu$ defined on $\mathcal{M}$.
Take a finite partition $\xi = \{C_1,\cdots,C_r\}$ of $M$. Follow the system trajectory and, at each step, record which cell it falls into, creating a realization of a random process called $\omega(x)$:
$$ \omega(x) = [\cdots \omega_{-n}(x),\cdots, \omega_{0}(x), \omega_{1}(x), \cdots, \omega_{m}(x) \cdots] $$
Then define the probability of blocks of length $n$:
$$ P(i_1,\cdots,i_n) = \mu \left( T^{-1}C_{i_1} \cap \cdots \cap T^{-n} C_{i_n} \right) $$
With that the entropy can be set:
$$ h(T,\xi) = -\lim_{ n \to \infty } \frac{1}{n} \sum_{i_{1},\cdots,i_{n}} P(i_{1},\cdots,i_{n}) \ln P(i_{1},\cdots,i_{n}) $$
Finally:
$$ h(T) = \sup_{\xi} h(T, \xi) $$
Where sup is taken over all finite partitions $\xi$.
It is clear from the definition that this entropy is a metric invariant of the dynamical system. The following theorem is the main tool that allows one to compute $h(T)$. It uses the notion of a generating partition.
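As a side note connecting this test with the first one: for many smooth chaotic systems (those satisfying the conditions of Pesin's identity), the KS entropy equals the sum of the positive Lyapunov exponents,

$$ h_{KS} = \sum_{\lambda_i > 0} \lambda_i $$

so for the fully chaotic logistic map one expects $h_{KS} = \ln 2$, the same value as its Lyapunov exponent.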
Kolmogorov-Sinai Calculator (1D)
In this case, the program flow is split into 3 functions; each one has a comment indicating which mathematical part it implements:
```python
import numpy as np
from collections import Counter

# Generate the x_n series from the logistic map
def generateSeries(x0, r, N):
    series = np.empty(N)
    x = x0
    for i in range(N):
        series[i] = x
        x = logicMap(x, r)
    return series

# Symbolic partition
def symbolSecuence(series, num_bins):
    bins = np.linspace(series.min(), series.max(), num_bins + 1)
    symbols = np.digitize(series, bins) - 1
    symbols[symbols == num_bins] = num_bins - 1
    return symbols

# Block entropy
def blockEntropy(symbols, block_len):
    counts = Counter()
    total = len(symbols) - block_len + 1
    for i in range(total):
        block = tuple(symbols[i:i+block_len])
        counts[block] += 1
    H = 0.0
    for count in counts.values():
        p = count / total
        H -= p * np.log(p)
    return H
```
Then you will need a for-loop to complete the program flow:
```python
entropies = []
h_estimates = []
for m in range(1, max_block + 1):
    Hm = blockEntropy(symbols, m)
    entropies.append(Hm)
    if m > 1:
        h_estimates.append(entropies[-1] - entropies[-2])
```
Here series = generateSeries(...), logicMap is your logistic map $r \times x \times (1-x)$ with $r = 4.0$, and symbols is the symbolSecuence of the series. Here is an example:
```python
def logicMap(x, r):
    return r * x * (1 - x)

x0 = 0.3
r = 4.0
N = 50000
num_bins = 4
max_block = 6
series = generateSeries(x0, r, N)
symbols = symbolSecuence(series, num_bins)
entropies = []
h_estimates = []
for m in range(1, max_block + 1):
    Hm = blockEntropy(symbols, m)
    entropies.append(Hm)
    if m > 1:
        h_estimates.append(entropies[-1] - entropies[-2])
print("H(m):", entropies)
print("h(m) [approx. KS]:", h_estimates)
```
As always, you can use a matplotlib viewer to make it more user-friendly.
```python
import matplotlib.pyplot as plt

plt.figure()
plt.plot(range(1, max_block+1), entropies, 'o-')
plt.xlabel('Block len (m)')
plt.ylabel('Entropy H(m)')
plt.title('Logistic Map Block Entropy')
plt.grid(True)
plt.show()

plt.figure()
plt.plot(range(2, max_block+1), h_estimates, 'o-')
plt.xlabel('Block len (m)')
plt.ylabel('H(m) Rate')
plt.title('KS Entropy')
plt.grid(True)
plt.show()
```
Extra Python Tip
Instead of calling the function generateSeries directly, you can implement our decorator to simplify the code. There will be more extra decorators in our PyPI module.
```python
@generateSeries
def logisticMap(x, r=4.0):
    return r * x * (1 - x)
```
Python code for @generateSeries:
```python
import numpy as np
from functools import wraps

def generateSeries(f):
    @wraps(f)
    def wrapper(x0, *args, N=1000, **kwargs):
        series = np.empty(N)
        x = x0
        for i in range(N):
            series[i] = x
            x = f(x, *args, **kwargs)
        return series
    return wrapper
```
PS: Careful! If you are using the decorator, it is recommendable to pass the arguments as keywords in your code: series = logisticMap(x0=0.3, r=4.0, N=50000)

Kolmogorov-Sinai Calculator (3D)
The only change in the Python code is the generateSeries function, which now takes the 3D logistic map as its iteration step:
```python
def generateSeries(x0, y0, z0, r, N, a=0.01, b=0.01, c=0.01, d=0.01, e=0.01, f=0.01):
    series = np.empty((N, 3))
    xyz = np.array([x0, y0, z0])
    for i in range(N):
        series[i] = xyz
        xyz = logisticMap3D(xyz, r, a, b, c, d, e, f)
    return series
```
In this case we will use the following logistic 3D map:
```python
def logisticMap3D(xyz, r=3.8, a=0.01, b=0.01, c=0.01, d=0.01, e=0.01, f=0.01):
    x, y, z = xyz
    x_next = r * x * (1 - x) + a * y + b * z
    y_next = r * y * (1 - y) + c * z + d * x
    z_next = r * z * (1 - z) + e * x + f * y
    return np.array([x_next, y_next, z_next])
```
or, in maths:
$$ x_{n+1} = r \cdot x_n \cdot (1 - x_n) + a \cdot y_n + b \cdot z_n $$
$$ y_{n+1} = r \cdot y_n \cdot (1 - y_n) + c \cdot z_n + d \cdot x_n $$
$$ z_{n+1} = r \cdot z_n \cdot (1 - z_n) + e \cdot x_n + f \cdot y_n $$
Decorator: @generate3DSeries
```python
import numpy as np
from functools import wraps

def generate3DSeries(f):
    @wraps(f)
    def wrapper(x0, y0, z0, *args, N=1000, **kwargs):
        series = np.empty((N, 3))
        xyz = np.array([x0, y0, z0])
        for i in range(N):
            series[i] = xyz
            xyz = f(xyz, *args, **kwargs)
        return series
    return wrapper
```
Differences between 1D and 3D
In this more theoretical part, as its name indicates, we will talk about the differences and characteristics of chaotic systems depending on their dimensions.
| Feature | 1D Systems | 3D Systems |
|---|---|---|
| Variables | 1 | 3 (scales with the number of dimensions) |
| Complexity | Simple | Complex |
| Attractors | Fixed, Periodic | Strange, Fractal |
| Tendency to Chaos | Lower | Higher |
| Interactions | Self-feedback | Between variables |
For systems with $N$ dimensions the tendency is proportional, i.e. the greater the number of variables, the greater the system's tendency toward chaos. 1D maps are always more predictable and easier to analyze than 2D or 3D maps, so when creating your own system you have to find a balance between complexity and efficiency.
A chaotic system can be considered a hyperchaotic system when it gets more than one positive Lyapunov exponent, which becomes more attainable as the number of variables in the map increases.
As a fun fact, 3D systems are usually used for artistic design, study of fractals and spirals.
Creating Chaotic Systems
Once we know the differences and which tests must be passed, we can start with the mathematical keys for the creation of the systems. I want to recall that the main feature of a chaotic system is its sensitivity while processing various inputs; when creating the system we need to take precautions with floating-point rounding errors (IEEE 754).
Before mentioning some math functions, remember: one possible (under study as of July 2025) advantage of CBE over other ciphers such as the AES standards is that it does not need libraries like libcmath or libgcrypt in the code, making it less heuristically detectable by EDR (Endpoint Detection and Response) and AV (Anti-Virus) products. Therefore, calculations with very complex mathematical functions, or even a strange implementation of the XOR logic gate, may be frowned upon by one of them.
Differences between symmetric and asymmetric systems won't be touched on here; just keep the variable clear, haha (you have a lot of tools nowadays).
Even so, there is no magic formula to create a chaotic system, or at least none that I know of. Simply by trial and error, studying the other systems that will be shown, mixing them, and with a little creativity, we will be able to create a system that meets the requirements seen above.
Mathematics used in CBE
Arithmetic Functions
On their own they may not be very powerful, but they are very useful to adjust imbalances in initial states, which can be seen as a sum of $\epsilon$ within logarithms, natural (Napierian) or of any other base.
Example:
$k = \ln(k_n - \mu + \epsilon)$
Moreover, they can also pass as harmless calculations in the eyes of EDR/AV when encrypting shellcodes in memory. One problem you may encounter is that these operations are linear, and on their own they struggle to introduce chaos; if abused, addition and subtraction can end up producing no real divergence.
When we say that a system “moves without real divergence”, we mean that it evolves with time, changing its values, but without real chaotic behavior. This means that there is no exponential sensitivity to initial conditions, which characterizes a chaotic system useful for encryption.
All in all, the excessive use of multiplication or division could run into:
- OverflowError: math range error
- ZeroDivisionError: division by zero
- ValueError: math domain error
- nan values
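A tiny demonstration of how these errors surface in plain Python (the super-exponential iteration is a hypothetical toy, just to trigger the overflow), together with a simple guard commonly used in chaotic maps:

```python
import math

# Super-exponential growth overflows the double range within a few steps
try:
    x = 1.0
    for _ in range(100):
        x = math.exp(x)
except OverflowError as err:
    print("OverflowError:", err)   # "math range error"

# Logarithm of a non-positive value
try:
    math.log(-0.5)
except ValueError as err:
    print("ValueError:", err)      # "math domain error"

# A common guard: shift the argument back into the domain
def safe_log(x: float, eps: float = 1e-12) -> float:
    return math.log(abs(x) + eps)

print(safe_log(-0.5))              # finite instead of an exception
```

The same idea (modulo, clipping, or an $\epsilon$ offset) applies to divisions and roots.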
Logistic map
Here is a 1D map example using only arithmetic functions:
$$ x_{n+1} = rx_n (1-x_n) $$
Range of chaos: $r \in [3.57, 4]$ (approximately; the chaotic regime begins after the period-doubling cascade, with periodic windows inside)
Tent Map
$$ x_{n+1} = \begin{cases} \mu x_n & \text{if } x_n < \frac{1}{2} \\ \mu (1 - x_n) & \text{if } x_n \geq \frac{1}{2} \end{cases} $$
Range of chaos: $\mu \in (1, 2]$
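The tent map is a one-liner in Python. One caveat worth flagging (an IEEE 754 quirk of the kind mentioned above): with $\mu = 2$ exactly, binary floating point collapses the orbit to 0 after a few dozen iterations, so a value just below 2 is used here:

```python
def tent_map(mu: float):
    # x_{n+1} = mu*x if x < 1/2 else mu*(1 - x)
    def step(x: float) -> float:
        return mu * x if x < 0.5 else mu * (1.0 - x)
    return step

step = tent_map(1.9999)  # mu = 2.0 exactly degenerates in floating point
x = 0.2
for _ in range(1000):
    x = step(x)
print(x)  # still wandering inside [0, 1]
```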
Algebraic functions
I was hesitating about whether to group them together or not, but the more structured and simple the post, the easier it will be for readers starting out in this world.
The use of polynomial, radical and rational functions is useful for more precise control of the systems. Polynomials, being nonlinear, are recommended for chaos-based generators, for use in PRNGs and hashes, even though they have a lot of dependence on their monomials. An example can be any 1D map slightly extended: $x_{n+1} = rx_n (1-x_n) + 2 x_{n-1}$
In the case of roots, they can modulate growth, avoiding the errors mentioned above like OverflowError or the math domain error. A big disadvantage is the restricted domain, but it can be useful for relaunching chaos; a similar problem is seen in logarithms.
Zaslavsky map variant
Anyway, square roots are not very common in chaotic maps; they can block the possibility of the system being hyperchaotic.
$$ x_{n+1} = x_n + \epsilon \cdot \sqrt{|y_n|} + k \cdot \sin(x_n) \\ y_{n+1} = y_n - \epsilon \cdot \sqrt{|x_{n+1}|} $$
Range of chaos: $\epsilon = 0.1$ and $k = 1.5$
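A direct transcription of the variant above (treat it as a sketch: the orbit is unbounded in $x$ and $y$, so for encryption use you would still want to wrap or normalize the output):

```python
import math

def zaslavsky_variant(eps: float = 0.1, k: float = 1.5):
    # x_{n+1} = x + eps*sqrt(|y|) + k*sin(x) ; y_{n+1} = y - eps*sqrt(|x_{n+1}|)
    def step(x: float, y: float):
        x_new = x + eps * math.sqrt(abs(y)) + k * math.sin(x)
        y_new = y - eps * math.sqrt(abs(x_new))
        return x_new, y_new
    return step

step = zaslavsky_variant()
x, y = 0.3, 0.3
for _ in range(1000):
    x, y = step(x, y)
print(x, y)
```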
Transcendent functions
Sines, cosines, tangents; general trigonometric functions, logarithmic and exponential functions are grouped in this subset of functions.
Trigonometric functions
Non-linear and periodic functions, introducing smooth or sharp transitions: a small variation in the input can result in large, seemingly random changes in the output values, ideal for hashing and key diffusion, thus promoting the notion of chaos.
They are a proper way to bound the output when creating modulo-based systems: $\sin(x) \in [-1,1]$. However, not everything is rosy, and they can lead to various problems: the risk of falling into cyclic periods, vertical asymptotes (for the tangent), and the floating-point rounding issues mentioned above.
When using these functions, external math libraries such as libm (math.h) are needed, creating some suspicious-looking calculations in front of EDR/AV.
Henon-Sine Map (2D-HSM)
$$ \begin{cases} x_{n+1} = (1-a \sin{x_n} + y_n) \bmod 1 \\ y_{n+1} = b x_n \bmod 1 \end{cases} $$
Range of chaos: $a \in (- \infty, -0.71] \cup [0.71, \infty)$ and $b = 0.7$
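A sketch of the 2D-HSM with both components wrapped mod 1, using $a = 0.9$ (inside the chaotic range above) and $b = 0.7$:

```python
import math

def henon_sine_map(a: float = 0.9, b: float = 0.7):
    # x_{n+1} = (1 - a*sin(x_n) + y_n) mod 1 ; y_{n+1} = b*x_n mod 1
    def step(x: float, y: float):
        return (1.0 - a * math.sin(x) + y) % 1.0, (b * x) % 1.0
    return step

step = henon_sine_map()
x, y = 0.1, 0.3
for _ in range(1000):
    x, y = step(x, y)
print(x, y)  # both components stay inside [0, 1)
```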
Logistic-sine-cosine (LSC) map
$$ x_{n+1} = \cos(\pi (4r x_n (1-x_n) + (1-r) \sin(\pi x_n) -0.5)) $$
Range of Chaos: $r \in [0,1]$
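The LSC map is naturally bounded because the outer cosine sends everything into $[-1, 1]$; here is a direct sketch with $r = 0.5$ (any $r$ in the range above works):

```python
import math

def lsc_map(r: float = 0.5):
    # x_{n+1} = cos(pi * (4 r x (1-x) + (1-r) sin(pi x) - 0.5))
    def step(x: float) -> float:
        return math.cos(math.pi * (4.0 * r * x * (1.0 - x)
                                   + (1.0 - r) * math.sin(math.pi * x) - 0.5))
    return step

step = lsc_map()
x = 0.3
for _ in range(1000):
    x = step(x)
print(x)  # always within [-1, 1]
```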
Logarithmic functions
They introduce asymmetry and rapid change for low-value inputs, excellent for destabilizing linear trends or amplifying small differences, increasing divergence with non-periodic behavior, again breaking symmetry.
A drawback is that the logarithm is undefined for $x \leq 0$, so ensure proper input constraints: $\log(|x|+ \epsilon)$
To be honest, in traditional chaos theory there are no logarithms in the first formulas, however, the following map is used for generating keys for chaotic encryption in images and signals.
It is very sensitive to numerical errors when $x \approx 0$.
Logistic + Log
$$ x_{n+1} = r x_n (1 - x_n) + s \log(|x_n| + \epsilon) $$
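A sketch of this hybrid map with the $\epsilon$ guard from above. The extra mod-1 wrap is my own addition (an assumption, not part of the formula) to keep the log term from pushing the state out of $[0, 1)$:

```python
import math

def logistic_log_map(r: float = 3.9, s: float = 0.05, eps: float = 1e-12):
    def step(x: float) -> float:
        x = r * x * (1.0 - x) + s * math.log(abs(x) + eps)
        return x % 1.0  # wrap back into [0, 1) -- my own normalization
    return step

step = logistic_log_map()
x = 0.3
for _ in range(1000):
    x = step(x)
print(x)
```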
Exponential functions
They provide mainly strong non-linearity with fast divergence, producing a higher output in the Lyapunov exponent test, and can generate high entropy in comparison with other methods.
The most common problem is rapid saturation due to exponential growth. Use normalization/modulo to avoid this problem.
1D Sine-powered chaotic map (1DSP)
$$ x_{n+1} = (x_n (\alpha + 1))^{\sin(\beta \pi + x_n)} $$
Range of Chaos: $\alpha > 0$ and $\beta \in [0,1]$
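A sketch of the 1DSP map. Since the power term can both explode and collapse toward 0, I add a mod-1 wrap and a tiny floor as normalization (both are my own assumptions, following the saturation advice above):

```python
import math

def sine_powered_map(alpha: float = 1.0, beta: float = 0.5, eps: float = 1e-12):
    # x_{n+1} = (x_n*(alpha+1)) ** sin(beta*pi + x_n), then wrapped into (0, 1)
    def step(x: float) -> float:
        x = (x * (alpha + 1.0)) ** math.sin(beta * math.pi + x)
        return max(x % 1.0, eps)  # keep strictly positive for the next power
    return step

step = sine_powered_map()
x = 0.7
for _ in range(1000):
    x = step(x)
print(x)
```

With $\beta = 0.5$ the exponent is $\cos(x_n)$, which stays positive for states in $(0, 1)$, so the power is always well defined on a positive base.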
Conclusion
In this post we have given a brief explanation of how to create a chaotic map, looking for some specific characteristics and real-world examples in order to learn from the experience of others.
As we said above, there is no magic formula for creating a chaotic system, but you will be able to create one based on others, so here you have some useful resources with lists of chaotic systems.
Good luck creating your map! Remember what the exact purpose of the system is, to be able to create it!
Real-world systems:
- https://www.iosrjournals.org/iosr-jm/papers/Vol18-issue4/Ser-1/D1804013239.pdf
- https://www.researchgate.net/figure/List-of-chaotic-maps-and-their-corresponding-number-of-dimensions-number-of-parameters_tbl2_353476026
- https://github.com/mehransab101/Chaotic-Maps
- https://csc.ucdavis.edu/~chaos/courses/poci/Readings/ch2.pdf
- https://handwiki.org/wiki/List_of_chaotic_maps