Compare commits

...

142 Commits

Author SHA1 Message Date
32ef695c87 master > master: minor fixes for windows 2022-08-29 19:04:03 +02:00
1ee5e7415b master > master: protokoll - Markdown 2022-07-14 14:36:54 +02:00
b1a467b2e4 master > master: protokoll - Markdown 2022-07-14 14:30:13 +02:00
9f807d2a21 master > master: berechnung 2022-07-14 11:58:42 +02:00
de4bd64e77 master > master: protokolle - wochen 14+15 2022-07-14 11:55:44 +02:00
0c61a29375 master > master: code py - VERSION up (patch) 2022-07-01 22:30:21 +02:00
bd355098cc master > master: code py - docs generated 2022-07-01 22:29:58 +02:00
278e3713c8 master > master: code py - config mit Bsp. von euklid + pollard 2022-07-01 13:49:59 +02:00
de238fede9 master > master: code py - pollard rho mit 2 modi 2022-07-01 13:49:33 +02:00
3c965eda7b master > master: protokoll - woche 13 2022-06-30 13:58:35 +02:00
56c22a568c master > master: protokoll - links 2022-06-30 09:06:40 +02:00
c1f346b80e master > master: protokolle wochen 11–13 2022-06-30 06:41:54 +02:00
2d96666bec Merge pull request 'woche12 ---> master' (#3) from woche12 into master
Reviewed-on: #3
2022-06-30 06:25:33 +02:00
7d07f4317e woche12 > master: code py - pollards rho mit log-wachstum für y 2022-06-30 06:24:10 +02:00
2bd07544f3 woche12 > master: code py - random walks ergänzt
- stopkriterien
- logging
2022-06-30 05:44:16 +02:00
7b456d177e woche12 > master: code py - logging für random walk 2022-06-30 05:43:10 +02:00
1e934dc3ef woche12 > master: code py - thirdparty imports für mathe+plots 2022-06-30 05:42:54 +02:00
a7c7179edb woche12 > master: code py - rohe implementierung der walks 2022-06-21 19:02:59 +02:00
5c43419890 woche12 > master: code py - imports von random methoden 2022-06-21 19:02:41 +02:00
c2cb11a141 woche12 > master: code py - vorberechnungen gemäß modell 2022-06-21 19:02:05 +02:00
01ef8c5758 woche12 > master: code py - schema 2022-06-21 19:01:13 +02:00
4001551c9c woche12 > master: code py - leere EPs für walks + genetic hinzugefügt (stubs) 2022-06-21 17:25:39 +02:00
17711327ef woche12 > master: code py - config ergänzt 2022-06-21 17:24:40 +02:00
3d05f7ae1d woche12 > master: code py - schemata für walks + genetic 2022-06-21 17:24:30 +02:00
aaa0b7a124 woche12 > master: code py - documentation 2022-06-20 17:33:29 +02:00
48c47f61b7 woche12 > master: code py - VERSION up 2022-06-20 17:33:20 +02:00
ad354b3b64 woche12 > master: code py - assets für pollards rho mit x-init 2022-06-20 17:24:37 +02:00
1b73ec263b woche12 > master: code py - schema für pollards rho mit x-init 2022-06-20 17:24:28 +02:00
15fe1b04d4 woche12 > master: code py - pollards rho implementiert 2022-06-20 17:24:00 +02:00
f6401f0dfc woche12 > master: code py - assets 2022-06-20 16:49:46 +02:00
f1200dfc25 woche12 > master: code py - euklid alg implementiert 2022-06-20 16:46:35 +02:00
f877ffc9e7 woche12 > master: code py - ep angelegt (stubs) 2022-06-20 15:56:17 +02:00
ac119a0b29 woche12 > master: schemata - neue commands 2022-06-20 15:55:43 +02:00
8cba2fdf13 master > master: code py - display volle loesung statt padding 2022-06-17 08:04:27 +02:00
3032840a1d master > master: code py - display mit leerzeichen um + 2022-06-16 22:39:10 +02:00
48fb136436 master > master: code py - darstellung alignment von summen 2022-06-16 22:27:29 +02:00
efacd73e51 master > master: code py - darstellung
- greedy permutation in Tabelle musste invertiert werden
- Aktualisierung der bound musste beim Loggin erscheinen werden
- value/cost zwecks leichter Vergleichbarkeit als Dezimalzahlen darstellen
2022-06-16 12:53:31 +02:00
ba394993e0 master > master: code py - assets korrigiert 2022-06-15 16:17:06 +02:00
059f9d8742 master > master: protokolle - wochen 10+11 2022-06-15 15:58:30 +02:00
21f61d71c3 master > master: code py - VERSION up 2022-06-15 15:57:43 +02:00
f3db0660f2 master > master: code py - documentation gebaut 2022-06-15 15:57:20 +02:00
77b2f40215 master > master: code py - algorithmus angepasst:
- korrekte behandlung von Permutationen
- hervorhebung von Summanden
- Spalte mit Infos über Moves
- optionen, um alle Gewichte zu zeigen / alle Summen zu zeigen
2022-06-15 15:56:43 +02:00
4cc4410c19 master > master: code py - schemata aktualisiert 2022-06-15 15:54:52 +02:00
3791220cee master > master: code py - fractional Werte + Sortierung in Greedy-Summen 2022-06-14 20:02:22 +02:00
c6149c230a master > master: code py - utils, rel perm 2022-06-14 20:01:48 +02:00
3b8f80cff9 master > master: code py - verbesserte Darstellung + »korrekte« Behandlung von Reihenfolgen
- im Kurs wird die Permutation nur für Greedy-Berechnungen angewandt
- die Reihenfolge der Items in der Hauptberechnung bei B&B bleibt wie bei Angaben
2022-06-14 14:40:02 +02:00
e3c3bbec37 master > master: code py - korrigierte logik 2022-06-14 12:19:35 +02:00
f45781be71 master > master: code py - pad ones/zeros für einelementige Fälle 2022-06-14 11:24:32 +02:00
56a93bbac9 master > master: code py - commands minor korrektur 2022-06-14 10:09:50 +02:00
a93b59539f master > master: code py - schema minor korrektur 2022-06-14 10:09:33 +02:00
304b8315f3 master > master: code py - bei output Sortierung rückgängig machen 2022-06-14 10:09:21 +02:00
828000a2ac master > master: code py - display für Sortierungsschritt 2022-06-14 09:03:29 +02:00
026cd6addf master > master: code py - display verbessert 2022-06-14 01:53:48 +02:00
ea36c82728 master > master: code py - algorithmen für rucksackproblem 2022-06-14 01:35:10 +02:00
7cfaf253b3 master > master: code py - schemata für rucksack 2022-06-14 01:34:44 +02:00
d79b10e190 master > master: READMEs angepasst 2022-06-12 10:32:25 +02:00
8e59bc941f master > master: code py - scripts, doc-building
- `just build-documentation` vom `just build` Befehl jetzt getrennt
- wird nur mit `just dist` Befehl ausgeführt
2022-06-12 10:27:57 +02:00
2a5986d490 Merge pull request 'dev ---> master: documentation von datenmodellen' (#1) from dev into master
Reviewed-on: #1
2022-06-11 16:10:44 +02:00
036c87f829 dev > master: code py - documentation erzeugt 2022-06-11 16:08:08 +02:00
4fa02a4962 master > master: code py - schemata überarbeitet 2022-06-11 16:07:23 +02:00
38b477e0ad master > master: code py - documentation generation 2022-06-11 16:07:09 +02:00
8acb2157ab master > master: code py - minor 2022-06-11 14:07:00 +02:00
d454f71bfa master > master: code py - unit tests aktualisiert 2022-06-11 14:02:09 +02:00
17ea04cfee master > master: code py - config schema aktualisiert 2022-06-11 14:01:24 +02:00
301a9c87be master > master: code py - refactored code mit Endpunkten 2022-06-11 14:00:45 +02:00
d8f6c802b2 master > master: code py - config schema defaults 2022-06-10 16:57:44 +02:00
99a194dfc8 master > master: code py - asset für testfälle aufgeräumt 2022-06-10 16:53:16 +02:00
670fd1b73e master > master: code py - Tarjan / Tabellenspalten umgetauscht 2022-06-10 16:38:42 +02:00
c0bc69450c master > master: code py - fügte tarjan im Hauptzyklus hinzu 2022-06-10 16:32:15 +02:00
3e8b3c157d master > master: code py - verb -> verbose 2022-06-10 16:04:45 +02:00
a83315e3e6 master > master: code py - fügte tarjan api hinzu 2022-06-10 16:04:30 +02:00
6920944cdf master > master: code py - syntaxfehler 2022-06-10 12:46:53 +02:00
90c48a85f9 master > master: code py - requirements für model-gen 2022-06-10 12:45:36 +02:00
92057b1882 master > master: code py - cleanup von optionalen aspekten 2022-06-10 12:44:04 +02:00
0145f1f873 master > master: code py - config optionen für richtungsprios 2022-06-10 12:40:55 +02:00
b4a5ee213c master > master: code py - refactoring 2022-06-10 12:28:00 +02:00
0116e2fbfb master > master: code py - matrix natürlicher aufschreiben 2022-06-10 12:11:14 +02:00
97295b71cd master > master: code-py - refactoring von config/models
- manche command-optionen wie verbosity nach config.yaml umgezogen.
2022-06-10 12:08:31 +02:00
0523c68100 master > master: code py - models + config implementiert 2022-06-10 11:51:05 +02:00
67f6caf2d5 master > master: code py - tree darstellung einheitlicher 2022-06-10 09:06:27 +02:00
969a880440 master > master: code py - bessere Baum-Ausgabe 2022-06-09 18:33:58 +02:00
61e49f19e0 master > master: code py - bessere Darstellung für moves 2022-06-09 18:16:07 +02:00
760bff11f2 master > master: code py - refactoring
- umbenennungen
- verbesseungen der Darstellungen
- enums zur beseren Steuerung der versch. Modi
- refactoring des Algorithmus
- Verwendung von tabulator
2022-06-09 18:13:42 +02:00
57bc1e68e6 master > master: code py - README (make -> just) 2022-06-09 18:12:35 +02:00
8111b8ef07 master > master: code py - setup 2022-06-09 18:12:24 +02:00
67aa70edfa master > master: code py - verbose/display mode als enum 2022-06-09 15:14:03 +02:00
b79cc24bc4 master > master: code py - Trennwand reduziert 2022-06-09 15:03:28 +02:00
a536d16c1d master > master: code py - requirements kompakteres Display 2022-06-09 15:00:06 +02:00
61841a5368 master > master: code py - third party 2022-06-09 14:55:08 +02:00
0a7b1dc9bc master > master: code py - requirements 2022-06-09 14:55:00 +02:00
954becea26 master > master: code py - minor 2022-06-09 10:45:35 +02:00
9c5b88b64d master > master: code py - hirschberg darstellungen verbessert 2022-06-09 08:48:09 +02:00
14a882e9d3 master > master: code py - hirschberg 2022-06-09 01:51:08 +02:00
53b2066e0d master > master: code py - tsp 2022-06-09 01:50:57 +02:00
07cf57eeab master > master: code py - imports 2022-06-09 01:50:36 +02:00
96bb225978 master > master: .gitignore 2022-06-02 11:42:59 +02:00
00e5432b6c master > master: tsp 2022-06-02 11:42:42 +02:00
5e92cd1fd4 master > master: README - formatierung 2022-04-19 09:05:31 +02:00
cc9cb01c46 master > master: makefile - lösche binäre Datei nicht 2022-04-19 09:02:32 +02:00
9f0263a23f master > master: README - minor 2022-04-19 09:02:10 +02:00
82bf3c12b0 master > master: README - python init 2022-04-19 09:02:04 +02:00
274f633f2c master > master: code-py - unittests 2022-04-18 19:16:28 +02:00
34c7d9d0c4 master > master: code-py - unittest config 2022-04-18 19:04:55 +02:00
04cdeb4e18 master > master: code-py - tarjan + stacks + logging 2022-04-18 19:04:42 +02:00
2ecf52fe3d master > master: code-py - gitignore 2022-04-18 19:03:52 +02:00
a3538095ba master > master: code-py - requirements 2022-04-18 19:03:44 +02:00
5be654e5db master > master: code-rs - minor 2022-04-18 19:03:28 +02:00
dc831a91c7 master > master: code - makefiles korrigiert + unittest 2022-04-18 19:03:01 +02:00
d5a79e77ce master > master: code-rust - algorithmus sauberer + Kommentare 2022-04-14 12:26:53 +02:00
859a779ba5 master > master: code-rust - minor 2022-04-14 12:03:00 +02:00
b6cea5920f master > master: protokoll - woche2 2022-04-14 12:01:44 +02:00
b30de02973 master > master: code-rust - schönheits op für algorithmus 2022-04-14 11:57:41 +02:00
5ce805c543 master > master: code-rust - .contains methode für stacks hinzugefügt 2022-04-14 11:55:31 +02:00
07b214bc24 master > master: code-rust - util min durch macro ersetzt 2022-04-14 11:54:49 +02:00
37f05d1ff0 master > master: code-rust - debug logging auslagern 2022-04-14 05:16:53 +02:00
b377120ea8 master > master: core-rust - export für alle log-methoden 2022-04-14 05:12:33 +02:00
c923604b59 master > master: cargo-rust - entferne nicht gebrauchte imports 2022-04-14 05:08:58 +02:00
fa732f68b2 master > master: code-rust - bsp in main geändert 2022-04-14 05:02:10 +02:00
c17ba0fefb master > master: code-rust - debugging im Alg 2022-04-14 05:01:47 +02:00
d0099eb7f9 master > master: code-rust - log 2022-04-14 05:01:08 +02:00
f192559eca master > master: code-rust - korrigierte fixture 2022-04-13 23:32:19 +02:00
6da791d7bb master > master: code-rust - Bsp hinzugefügt 2022-04-13 23:13:14 +02:00
4b2c73b9f8 master > master: README 2022-04-13 23:12:37 +02:00
e667051a81 master > master: code-rust - README 2022-04-10 15:59:37 +02:00
279677fb7c master > master: code-rust - String-type macro 2022-04-10 15:42:19 +02:00
5f5165dca8 master > master: code-rust - verwendung von assertion rules in unit-tests 2022-04-10 10:35:35 +02:00
5da0084a6d master > master: code-rust - fügte assertion rules hinzu 2022-04-10 10:35:18 +02:00
cadd869155 master > master: code-rust - unit tests für gph + tarjan 2022-04-08 23:36:29 +02:00
0d8583dd0c master > master: code-rust - utils für vec -> set conv 2022-04-08 23:35:30 +02:00
cc297011b2 master > master: code-rust - cargo 2022-04-08 23:35:04 +02:00
17723e71a7 master > master: code-rust - makefile 2022-04-08 17:08:28 +02:00
85964de351 master > master: code-rust - basic unit tests hinzugefügt 2022-04-08 17:07:36 +02:00
2f4032b64a master > master: code-rust - minor 2022-04-08 17:07:03 +02:00
2f600e01e3 master > master: code-rust - bsp 2022-04-08 07:27:10 +02:00
0dcc97cd6d master > master: code-rust - tarjan alg 2022-04-08 07:27:04 +02:00
b85574cda4 master > master: code-rust - init 2022-04-08 07:26:56 +02:00
3338b255a1 master > master: src-py bsp 2022-04-07 16:41:33 +02:00
cd265e4ee9 master > master: src-py tarjan alg 2022-04-07 16:41:26 +02:00
4b4634994c master > master: src-py - init 2022-04-07 16:41:15 +02:00
15954963ca master > master: protokoll - woche 1 2022-04-07 16:40:59 +02:00
f41ec73a05 master > master: src - rust 2022-03-30 18:00:21 +02:00
99b8bbd6a7 master > master: src - py 2022-03-30 18:00:11 +02:00
183 changed files with 8713 additions and 14 deletions

2
.gitignore vendored
View File

@ -15,8 +15,6 @@
################################################################
!/notes
!/notes/glossar.md
!/notes/quellen.md
!/protocol
!/protocol/README.md

View File

@ -21,6 +21,8 @@ Siehe Moodle!
## Code ##
In the subfolder [`code/rust`](./code/rust)
(and possibly [`code/python`](./code/python)),
implementations of the algorithms may be found.
In the subfolders
[`code/rust`](./code/rust)
and
[`code/python`](./code/python) (somewhat more detailed),
implementations of some of the algorithms and data structures can be found.

17
code/python/.coveragerc Normal file
View File

@ -0,0 +1,17 @@
[run]
source="."
[report]
show_missing = true
omit =
# ignore tests folder
tests/*
# ignore thirdparty imports
src/thirdparty/*
# ignore __init__ files (only used for exports)
**/__init__.py
# ignore main.py
main.py
precision = 0
exclude_lines =
pragma: no cover

4
code/python/.env Normal file
View File

@ -0,0 +1,4 @@
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Environment variables
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
APPNAME=ads2

62
code/python/.gitignore vendored Normal file
View File

@ -0,0 +1,62 @@
*
!/.gitignore
################################################################
# MAIN FOLDER
################################################################
!/.env
!/justfile
!/.coveragerc
!/README.md
!/LICENSE
!/requirements.txt
!/pyproject.toml
################################################################
# PROJECT FILES
################################################################
!/src
!/src/**/
!/src/**/*.py
!/main.py
!/models
!/models/*-schema.yaml
!/models/README.md
!/docs
!/docs/*/
!/docs/*/Models/
!/docs/**/*.md
!/tests
!/tests/**/
!/tests/**/*.py
!/assets
!/assets/*.yaml
!/dist
!/dist/VERSION
################################################################
# AUXILIARY
################################################################
/logs
################################################################
# ARTEFACTS
################################################################
/**/__pycache__
/**/.DS_Store
/**/__archive__*
################################################################
# Git Keep
################################################################
!/**/.gitkeep

0
code/python/LICENSE Normal file
View File

43
code/python/README.md Normal file
View File

@ -0,0 +1,43 @@
# ADS2 - Implementation in Python #
The folder [./src/*](src/) contains modules with data structures and algorithms.
The folder [./tests/*](tests/) contains _unit tests_,
which exercise the various data structures and algorithms with test cases.
You can also try out the methods with data directly
in the code of [./main.py](main.py).
## Prerequisites ##
1. The Python interpreter **`^3.10.*`** is required.
2. The **`just`** tool is required (see <https://github.com/casey/just#installation>).
## Build -> Test -> Run ##
In an IDE, navigate to this folder within the repo.
<br/>
Open a bash console and run the following commands:
```bash
# Show all commands:
just
# Install the requirements (only needed after changes, but then always):
just build
# Run the unit tests:
just tests
# Run the programme:
just run
# Clean up all artefacts:
just clean
```
You can also run the tests individually with a good editor/IDE.
## Test cases via config file ##
Instead of having to touch the code each time, you can record cases for the various algorithms
in **[./assets/commands.yaml](assets/commands.yaml)** and run the programme (via `just run`).
Further global settings (e.g. verbosity, penalty constants, etc.) can be adjusted in
**[./assets/config.yaml](assets/config.yaml)**.
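A single case is one list entry in that file; for example, the Tarjan case from the asset file:
```yaml
- name: TARJAN
  nodes: [a, b, c]
  edges: [[a, c], [c, a], [b, c]]
```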

View File

@ -0,0 +1,200 @@
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# NOTE:
# This file contains specifications of concrete cases
# for the algorithms to be demonstrated.
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Examples for seminar week 2 (sheet 1)
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- name: TARJAN
nodes: [a,b,c]
edges: [[a, c], [c, a], [b, c]]
- name: TARJAN
nodes: [1, 2, 3, 4, 5, 6, 7, 8]
edges: [
[1, 2],
[1, 3],
[2, 4],
[2, 5],
[3, 5],
[3, 6],
[3, 8],
[4, 5],
[4, 7],
[5, 1],
[5, 8],
[6, 8],
[7, 8],
[8, 6],
]
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Examples for seminar week 9 (sheet 8)
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- name: TSP
dist: &ref_dist [
[0, 7, 4, 3],
[7, 0, 5, 6],
[2, 5, 0, 5],
[2, 7, 4, 0],
]
optimise: MIN
- name: TSP
dist: *ref_dist
optimise: MAX
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Examples for seminar week 10 (sheet 9)
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- name: HIRSCHBERG
word1: 'happily ever after'
word2: 'apples'
once: false
- name: HIRSCHBERG
word1: 'happily'
word2: 'applses'
once: false
- name: HIRSCHBERG
word1: 'happily ever, lol'
word2: 'apple'
once: false
- name: HIRSCHBERG
word1: 'ACGAAG'
word2: 'AGAT'
once: false
- name: HIRSCHBERG
word1: 'ANSTRENGEN'
word2: 'ANSPANNEN'
once: false
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Examples for seminar week 11 (sheet 10)
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- name: RUCKSACK
algorithm: GREEDY
allow-fractional: true
# allow-fractional: false
max-cost: 10
items: [a, b, c, d, e]
costs:
[3, 4, 5, 2, 1]
values:
[8, 7, 8, 3, 2]
- name: RUCKSACK
algorithm: BRANCH-AND-BOUND
max-cost: 10
items: [a, b, c, d, e]
costs: [3, 4, 5, 2, 1]
values: [8, 7, 8, 3, 2]
- name: RUCKSACK
algorithm: BRANCH-AND-BOUND
max-cost: 460
items: [
'Lakritze',
'Esspapier',
'Gummibärchen',
'Schokolade',
'Apfelringe',
]
costs: [220, 80, 140, 90, 100]
values: [100, 10, 70, 80, 100]
- name: RUCKSACK
algorithm: BRANCH-AND-BOUND
max-cost: 90
items: [
'Sonnenblumenkerne',
'Buchweizen',
'Rote Beete',
'Hirse',
'Sellerie',
]
costs: [30, 10, 50, 10, 80]
values: [17, 14, 17, 5, 25]
- name: RUCKSACK
algorithm: BRANCH-AND-BOUND
max-cost: 900
items: [
'Sellerie',
'Sonnenblumenkerne',
'Rote Beete',
'Hirse',
'Buchweizen',
]
costs: [600, 100, 800, 100, 200]
values: [10, 15, 20, 5, 15]
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Examples for seminar week 12
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- name: RANDOM-WALK
algorithm: GRADIENT
one-based: true
coords-init: [3, 3]
landscape: &ref_landscape1
neighbourhoods:
radius: 1
# metric: MANHATTAN
metric: MAXIMUM
labels:
- x
- y
values:
- [5, 2, 1, 3, 4, 7]
- [8, 4, 3, 5, 5, 6]
- [9, 1, 2, 6, 8, 4]
- [7, 4, 4, 3, 7, 3]
- [6, 4, 2, 1, 0, 7]
- [4, 3, 5, 2, 1, 8]
optimise: MAX
- name: RANDOM-WALK
algorithm: ADAPTIVE
one-based: true
coords-init: [3, 3]
landscape: *ref_landscape1
optimise: MAX
- name: RANDOM-WALK
algorithm: METROPOLIS
annealing: false
temperature-init: 3.
one-based: true
coords-init: [5, 3]
landscape: *ref_landscape1
optimise: MAX
- name: RANDOM-WALK
algorithm: METROPOLIS
annealing: false
temperature-init: 3.
one-based: false
coords-init: [0]
landscape:
neighbourhoods:
radius: 1
metric: MANHATTAN
labels:
- x
values: [4, 6.5, 2]
optimise: MAX
- name: GENETIC
population:
- [3, 5, 4, 1, 6, 7, 2, 8, 9]
- [4, 5, 3, 2, 1, 6, 7, 8, 9]
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Examples for seminar week 13
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- name: EUKLID
numbers:
- 2017
- 58
- name: POLLARD-RHO
growth: LINEAR
# growth: EXPONENTIAL
number: 534767
x-init: 5

View File

@ -0,0 +1,46 @@
info:
author: Raj Dahya
title: Algorithmen und Datenstrukturen 2
description: |-
A code project that implements algorithms and data structures
from the course ADS2 at the Universität Leipzig
(summer semester 2022).
options:
# log-level: DEBUG
log-level: INFO
verbose: &ref_verbose true
tarjan:
verbose: *ref_verbose
tsp:
verbose: *ref_verbose
hirschberg:
# default values are (1, 1) and (2, 1):
penality-gap: 1
penality-mismatch: 1
# lower value ==> higher priority:
move-priorities:
diagonal: 0
horizontal: 1
vertical: 2
# verbose: []
verbose:
- COSTS
- MOVES
show: []
# show:
# - ATOMS
# - TREE
rucksack:
verbose: *ref_verbose
show: []
# show:
# - ALL-WEIGHTS
# - ALL-SUMS
genetic:
verbose: *ref_verbose
random-walk:
verbose: *ref_verbose
euklid:
verbose: *ref_verbose
pollard-rho:
verbose: *ref_verbose

1
code/python/dist/VERSION vendored Normal file
View File

@ -0,0 +1 @@
0.3.1

View File

@ -0,0 +1,9 @@
# Command
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**name** | [**EnumAlgorithmNames**](EnumAlgorithmNames.md) | | [default to null]
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)

View File

@ -0,0 +1,10 @@
# CommandEuklid
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**name** | [**EnumAlgorithmNames**](EnumAlgorithmNames.md) | | [default to null]
**numbers** | [**List**](integer.md) | | [default to null]
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)

View File

@ -0,0 +1,10 @@
# CommandGenetic
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**name** | [**EnumAlgorithmNames**](EnumAlgorithmNames.md) | | [default to null]
**population** | [**List**](array.md) | | [default to null]
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)

View File

@ -0,0 +1,12 @@
# CommandHirschberg
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**name** | [**EnumAlgorithmNames**](EnumAlgorithmNames.md) | | [default to null]
**word1** | [**String**](string.md) | Word that gets placed vertically in algorithm. | [default to null]
**word2** | [**String**](string.md) | Word that gets placed horizontally in algorithm | [default to null]
**once** | [**Boolean**](boolean.md) | | [optional] [default to false]
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)

View File

@ -0,0 +1,12 @@
# CommandPollard
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**name** | [**EnumAlgorithmNames**](EnumAlgorithmNames.md) | | [default to null]
**number** | [**Integer**](integer.md) | | [default to null]
**growth** | [**EnumPollardGrowthRate**](EnumPollardGrowthRate.md) | | [default to null]
**xMinusinit** | [**Integer**](integer.md) | | [optional] [default to 2]
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)

View File

@ -0,0 +1,16 @@
# CommandRandomWalk
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**name** | [**EnumAlgorithmNames**](EnumAlgorithmNames.md) | | [default to null]
**algorithm** | [**EnumWalkMode**](EnumWalkMode.md) | | [default to null]
**landscape** | [**DataTypeLandscapeGeometry**](DataTypeLandscapeGeometry.md) | | [default to null]
**optimise** | [**EnumOptimiseMode**](EnumOptimiseMode.md) | | [default to null]
**coordsMinusinit** | [**List**](integer.md) | Initial co-ordinates to start the algorithm. | [optional] [default to null]
**temperatureMinusinit** | [**Float**](float.md) | | [optional] [default to null]
**annealing** | [**Boolean**](boolean.md) | | [optional] [default to false]
**oneMinusbased** | [**Boolean**](boolean.md) | | [optional] [default to false]
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)

View File

@ -0,0 +1,15 @@
# CommandRucksack
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**name** | [**EnumAlgorithmNames**](EnumAlgorithmNames.md) | | [default to null]
**algorithm** | [**EnumRucksackAlgorithm**](EnumRucksackAlgorithm.md) | | [default to null]
**allowMinusfractional** | [**Boolean**](boolean.md) | | [optional] [default to false]
**maxMinuscost** | [**BigDecimal**](number.md) | Upper bound for total cost of rucksack. | [default to null]
**costs** | [**List**](number.md) | Array of cost for each item (e.g. volume, weight, price, time, etc.). | [default to null]
**values** | [**List**](number.md) | Value extracted from each item (e.g. energy, profit, etc.). | [default to null]
**items** | [**List**](string.md) | Optional names of the items (if empty, defaults to 1-based indexes). | [optional] [default to []]
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)

View File

@ -0,0 +1,11 @@
# CommandTarjan
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**name** | [**EnumAlgorithmNames**](EnumAlgorithmNames.md) | | [default to null]
**nodes** | [**List**](anyOf&lt;integer,number,string&gt;.md) | | [default to null]
**edges** | [**List**](array.md) | | [default to null]
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)

View File

@ -0,0 +1,11 @@
# CommandTsp
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**name** | [**EnumAlgorithmNames**](EnumAlgorithmNames.md) | | [default to null]
**dist** | [**List**](array.md) | | [default to null]
**optimise** | [**EnumOptimiseMode**](EnumOptimiseMode.md) | | [default to null]
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)

View File

@ -0,0 +1,11 @@
# DataTypeLandscapeGeometry
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**neighbourhoods** | [**DataTypeLandscapeNeighbourhoods**](DataTypeLandscapeNeighbourhoods.md) | | [default to null]
**labels** | [**List**](string.md) | | [default to null]
**values** | [**DataTypeLandscapeValues**](DataTypeLandscapeValues.md) | | [default to null]
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)

View File

@ -0,0 +1,10 @@
# DataTypeLandscapeNeighbourhoods
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**radius** | [**BigDecimal**](number.md) | | [optional] [default to 1]
**metric** | [**EnumLandscapeMetric**](EnumLandscapeMetric.md) | | [default to null]
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)

View File

@ -0,0 +1,8 @@
# DataTypeLandscapeValues
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)

View File

@ -0,0 +1,8 @@
# EnumAlgorithmNames
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)

View File

@ -0,0 +1,8 @@
# EnumLandscapeMetric
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)

View File

@ -0,0 +1,8 @@
# EnumOptimiseMode
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)

View File

@ -0,0 +1,8 @@
# EnumPollardGrowthRate
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)

View File

@ -0,0 +1,8 @@
# EnumRucksackAlgorithm
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)

View File

@ -0,0 +1,8 @@
# EnumWalkMode
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)

View File

@ -0,0 +1,38 @@
# Documentation for Schemata for command instructions
<a name="documentation-for-api-endpoints"></a>
## Documentation for API Endpoints
All URIs are relative to *http://.*
Class | Method | HTTP request | Description
------------ | ------------- | ------------- | -------------
<a name="documentation-for-models"></a>
## Documentation for Models
- [Command](.//Models/Command.md)
- [CommandEuklid](.//Models/CommandEuklid.md)
- [CommandGenetic](.//Models/CommandGenetic.md)
- [CommandHirschberg](.//Models/CommandHirschberg.md)
- [CommandPollard](.//Models/CommandPollard.md)
- [CommandRandomWalk](.//Models/CommandRandomWalk.md)
- [CommandRucksack](.//Models/CommandRucksack.md)
- [CommandTarjan](.//Models/CommandTarjan.md)
- [CommandTsp](.//Models/CommandTsp.md)
- [DataTypeLandscapeGeometry](.//Models/DataTypeLandscapeGeometry.md)
- [DataTypeLandscapeNeighbourhoods](.//Models/DataTypeLandscapeNeighbourhoods.md)
- [DataTypeLandscapeValues](.//Models/DataTypeLandscapeValues.md)
- [EnumAlgorithmNames](.//Models/EnumAlgorithmNames.md)
- [EnumLandscapeMetric](.//Models/EnumLandscapeMetric.md)
- [EnumOptimiseMode](.//Models/EnumOptimiseMode.md)
- [EnumPollardGrowthRate](.//Models/EnumPollardGrowthRate.md)
- [EnumRucksackAlgorithm](.//Models/EnumRucksackAlgorithm.md)
- [EnumWalkMode](.//Models/EnumWalkMode.md)
<a name="documentation-for-authorization"></a>
## Documentation for Authorization
All endpoints do not require authorization.

View File

@ -0,0 +1,18 @@
# AppOptions
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**logMinuslevel** | [**EnumLogLevel**](EnumLogLevel.md) | | [default to null]
**verbose** | [**Boolean**](boolean.md) | Global setting for verbosity. | [optional] [default to false]
**tarjan** | [**AppOptions_tarjan**](AppOptions_tarjan.md) | | [default to null]
**tsp** | [**AppOptions_tarjan**](AppOptions_tarjan.md) | | [default to null]
**hirschberg** | [**AppOptions_hirschberg**](AppOptions_hirschberg.md) | | [default to null]
**rucksack** | [**AppOptions_rucksack**](AppOptions_rucksack.md) | | [default to null]
**randomMinuswalk** | [**AppOptions_tarjan**](AppOptions_tarjan.md) | | [default to null]
**genetic** | [**AppOptions_tarjan**](AppOptions_tarjan.md) | | [default to null]
**euklid** | [**AppOptions_tarjan**](AppOptions_tarjan.md) | | [default to null]
**pollardMinusrho** | [**AppOptions_tarjan**](AppOptions_tarjan.md) | | [default to null]
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)

View File

@ -0,0 +1,13 @@
# AppOptionsHirschberg
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**penalityMinusmismatch** | [**BigDecimal**](number.md) | | [default to 1]
**penalityMinusgap** | [**BigDecimal**](number.md) | | [default to 1]
**moveMinuspriorities** | [**AppOptions_hirschberg_move_priorities**](AppOptions_hirschberg_move_priorities.md) | | [default to null]
**verbose** | [**List**](EnumHirschbergVerbosity.md) | | [optional] [default to []]
**show** | [**List**](EnumHirschbergShow.md) | | [optional] [default to []]
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)

View File

@ -0,0 +1,11 @@
# AppOptionsHirschbergMovePriorities
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**diagonal** | [**Integer**](integer.md) | | [optional] [default to 0]
**horizontal** | [**Integer**](integer.md) | | [optional] [default to 1]
**vertical** | [**Integer**](integer.md) | | [optional] [default to 2]
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)

View File

@ -0,0 +1,10 @@
# AppOptionsRucksack
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**verbose** | [**Boolean**](boolean.md) | | [optional] [default to false]
**show** | [**List**](EnumRucksackShow.md) | | [optional] [default to []]
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)

View File

@ -0,0 +1,9 @@
# AppOptionsTarjan
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**verbose** | [**Boolean**](boolean.md) | | [default to false]
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)

View File

@ -0,0 +1,10 @@
# Config
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**info** | [**Info**](Info.md) | | [default to null]
**options** | [**AppOptions**](AppOptions.md) | | [default to null]
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)

View File

@ -0,0 +1,8 @@
# EnumHirschbergShow
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)

View File

@ -0,0 +1,8 @@
# EnumHirschbergVerbosity
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)

View File

@ -0,0 +1,8 @@
# EnumLogLevel
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)

View File

@ -0,0 +1,8 @@
# EnumRucksackShow
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)

View File

@ -0,0 +1,11 @@
# Info
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**title** | [**String**](string.md) | | [default to null]
**description** | [**String**](string.md) | | [default to null]
**author** | [**String**](string.md) | | [default to null]
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)

View File

@ -0,0 +1,31 @@
# Documentation for Schemata for config models
<a name="documentation-for-api-endpoints"></a>
## Documentation for API Endpoints
All URIs are relative to *http://.*
Class | Method | HTTP request | Description
------------ | ------------- | ------------- | -------------
<a name="documentation-for-models"></a>
## Documentation for Models
- [AppOptions](.//Models/AppOptions.md)
- [AppOptionsHirschberg](.//Models/AppOptionsHirschberg.md)
- [AppOptionsHirschbergMovePriorities](.//Models/AppOptionsHirschbergMovePriorities.md)
- [AppOptionsRucksack](.//Models/AppOptionsRucksack.md)
- [AppOptionsTarjan](.//Models/AppOptionsTarjan.md)
- [Config](.//Models/Config.md)
- [EnumHirschbergShow](.//Models/EnumHirschbergShow.md)
- [EnumHirschbergVerbosity](.//Models/EnumHirschbergVerbosity.md)
- [EnumLogLevel](.//Models/EnumLogLevel.md)
- [EnumRucksackShow](.//Models/EnumRucksackShow.md)
- [Info](.//Models/Info.md)
<a name="documentation-for-authorization"></a>
## Documentation for Authorization
All endpoints do not require authorization.

199
code/python/justfile Normal file
View File

@ -0,0 +1,199 @@
# set shell := [ "bash", "-uc" ]
_default:
@- just --unsorted --choose
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Justfile
# NOTE: Do not change the contents of this file!
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# VARIABLES
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
PYTHON := if os_family() == "windows" { "py -3" } else { "python3" }
GEN_MODELS := "datamodel-codegen"
GEN_MODELS_DOCUMENTATION := "openapi-generator"
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Macros
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
_create-file-if-not-exists fname:
@touch "{{fname}}";
_create-folder-if-not-exists path:
@if ! [ -d "{{path}}" ]; then mkdir "{{path}}"; fi
_delete-if-file-exists fname:
@if [ -f "{{fname}}" ]; then rm "{{fname}}"; fi
_delete-if-folder-exists path:
@if [ -d "{{path}}" ]; then rm -rf "{{path}}"; fi
_clean-all-files pattern:
@find . -type f -name "{{pattern}}" -exec basename {} \; 2> /dev/null
@- find . -type f -name "{{pattern}}" -exec rm {} \; 2> /dev/null
_clean-all-folders pattern:
@find . -type d -name "{{pattern}}" -exec basename {} \; 2> /dev/null
@- find . -type d -name "{{pattern}}" -exec rm -rf {} \; 2> /dev/null
_docker-build-and-log service:
@docker compose up --build -d {{service}} && docker compose logs -f --tail=0 {{service}}
_docker-build-and-interact service container:
@docker compose up --build -d {{service}} && docker attach {{container}}
_generate-models path name:
@{{GEN_MODELS}} \
--input-file-type openapi \
--encoding "UTF-8" \
--disable-timestamp \
--use-schema-description \
--allow-population-by-field-name \
--snake-case-field \
--strict-nullable \
--target-python-version 3.9 \
--input {{path}}/{{name}}-schema.yaml \
--output {{path}}/generated/{{name}}.py
_generate-models-documentation path_schema path_docs name:
@- {{GEN_MODELS_DOCUMENTATION}} generate \
--input-spec {{path_schema}}/{{name}}-schema.yaml \
--generator-name markdown \
--output "{{path_docs}}/{{name}}"
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# TARGETS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# TARGETS: build
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
build:
@just build-requirements
@just _check-system-requirements
@just build-models
build-requirements:
@{{PYTHON}} -m pip install --disable-pip-version-check -r requirements.txt
build-models:
@echo "Generate data models from schemata."
@just _delete-if-folder-exists "models/generated"
@just _create-folder-if-not-exists "models/generated"
@- just _generate-models "models" "config"
@- just _generate-models "models" "commands"
build-documentation:
@echo "Generate documentations data models from schemata."
@just _delete-if-folder-exists "docs"
@just _create-folder-if-not-exists "docs"
@- just _generate-models-documentation "models" "docs" "config"
@- just _generate-models-documentation "models" "docs" "commands"
@- just _clean-all-files ".openapi-generator*"
@- just _clean-all-folders ".openapi-generator*"
dist:
@just build
@just build-documentation
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# TARGETS: run
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
run:
@{{PYTHON}} main.py
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# TARGETS: tests
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
tests: tests-unit
tests-logs:
@just _create-logs
@- just tests
@just _display-logs
tests-unit-logs:
@just _create-logs
@- just tests-unit
@just _display-logs
tests-unit:
@{{PYTHON}} -m pytest tests \
--ignore=tests/integration \
--cov-reset \
--cov=. \
2> /dev/null
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# TARGETS: qa
# NOTE: use for development only.
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
qa:
@{{PYTHON}} -m coverage report -m
coverage source_path tests_path:
@just _create-logs
@-just _coverage-no-logs "{{source_path}}" "{{tests_path}}"
@just _display-logs
_coverage-no-logs source_path tests_path:
@{{PYTHON}} -m pytest {{tests_path}} \
--ignore=tests/integration \
--cov-reset \
--cov={{source_path}} \
--capture=tee-sys \
2> /dev/null
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# TARGETS: clean
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
clean:
@-just clean-basic
@-just clean-sessions
clean-sessions:
@echo "All sessions will be force removed."
@- just _delete-if-folder-exists ".secrets" 2> /dev/null
clean-basic:
@echo "All system artefacts will be force removed."
@- just _clean-all-files ".DS_Store" 2> /dev/null
@echo "All test artefacts will be force removed."
@- just _clean-all-folders ".pytest_cache" 2> /dev/null
@- just _delete-if-file-exists ".coverage" 2> /dev/null
@- just _delete-if-folder-exists "logs"
@echo "All build artefacts will be force removed."
@- just _clean-all-folders "__pycache__" 2> /dev/null
@- just _delete-if-folder-exists "models/generated" 2> /dev/null
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# TARGETS: logging, session
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
_create-logs:
@# For logging purposes (since stdout is rechanneled):
@just _delete-if-file-exists "logs/debug.log"
@just _create-folder-if-not-exists "logs"
@just _create-file-if-not-exists "logs/debug.log"
_display-logs:
@echo ""
@echo "Content of logs/debug.log:"
@echo "----------------"
@echo ""
@- cat logs/debug.log
@echo ""
@echo "----------------"
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# TARGETS: requirements
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
check-system:
@echo "Operating System detected: {{os_family()}}."
@echo "Python command used: {{PYTHON}}."
_check-system-requirements:
@if ! ( {{GEN_MODELS}} --help >> /dev/null 2> /dev/null ); then \
echo "Command '{{GEN_MODELS}}' did not work. Ensure that the installation of 'datamodel-code-generator' worked and that system paths are set." \
exit 1; \
fi
@if ! ( {{GEN_MODELS_DOCUMENTATION}} --help >> /dev/null 2> /dev/null ); then \
echo "Command '{{GEN_MODELS_DOCUMENTATION}}' did not work. Ensure that the installation of 'datamodel-code-generator' worked and that system paths are set." \
exit 1; \
fi

49
code/python/main.py Normal file
View File

@ -0,0 +1,49 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# IMPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
import os;
import sys
os.chdir(os.path.join(os.path.dirname(__file__)));
sys.path.insert(0, os.getcwd());
from src.models.config import *;
from src.core import log;
from src.setup import config;
from src import api;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# MAIN METHOD
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def enter(*args: str):
# set logging level:
log.configure_logging(config.LOG_LEVEL);
# process inputs:
if len(args) == 0:
# execute the commands from the assets:
for command in config.COMMANDS:
result = api.run_command(command);
# ignored if log-level >> DEBUG
log.log_result(result, debug=True);
else:
# execute the CLI command:
result = api.run_command_from_json(args[0]);
# ignored if log-level >> DEBUG
log.log_result(result, debug=True);
return;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# EXECUTION
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
if __name__ == '__main__':
sys.tracebacklimit = 0;
# NOTE: necessary for Windows, to ensure that console output is rendered correctly:
os.system('');
enter(*sys.argv[1:]);
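In short: called without arguments, `enter` runs every command from the assets; called with one argument, it hands that argument to `api.run_command_from_json`. A usage sketch (the JSON form of a single command is an assumption here, inferred from the command schema):
```bash
# Run all commands defined in assets/commands.yaml (this is what `just run` does):
python3 main.py

# Hypothetical direct call, passing a single command as JSON:
python3 main.py '{"name": "EUKLID", "numbers": [2017, 58]}'
```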

View File

@ -0,0 +1,51 @@
# Models #
- In this folder the configuration files for the models of data classes (as `*.yaml`-files) are stored.
- These are interpreted by an OpenAPI code generator (`datamodel-code-generator`) to generate the classes for the source code during the `build` phase of the programme.
- Once the models are generated, the app configurations are read at run time and interpreted as these models.
- Generating instead of manually encoding the classes is safer/more stable as it allows for validation.
## Folder structure ##
As per the 3rd point, the `yaml`-file/s for the models (schemata)
and the `yaml`-file/s for actual configuration values for the app
are to be kept separately.
All models are to be stored in [./models](../models/),
whereas configuration files are to be stored in [./assets](../assets/).
Note that the generated python files are not stored in the repository.
When deployed on a server, these are generated as part of the `build`-process.
## Developer notes ##
### Prerequisites ###
For the python source code, we currently use:
- Python: `v3.10.*`
- Modules:
- `datamodel-code-generator==0.12.0`
(This is all taken care of during the `build` process.)
### Building the models ###
To build the models before run time, use the following command + options:
```bash
datamodel-codegen
--input-file-type openapi
--encoding "UTF-8"
--disable-timestamp
--use-schema-description
--snake-case-field
--strict-nullable
--input <path/to/file.yml>
--output <path/to/file.py>
```
(cf. https://pydantic-docs.helpmanual.io/datamodel_code_generator/).
Alternatively, call:
```bash
just build
```
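As a rough illustration of how the generated config model might then be consumed at run time — a minimal sketch, assuming the output path set in the `justfile` and the pydantic v1 API that `datamodel-code-generator` 0.12.0 targets; the project's actual wiring lives in `src/setup`:
```python
# Hypothetical sketch: read assets/config.yaml and validate it against the
# generated model (models/generated/config.py, produced by `just build`).
import yaml  # assumption: PyYAML is available, since the app reads YAML assets
from models.generated.config import Config  # output path as set in the justfile

with open("assets/config.yaml", encoding="utf-8") as fp:
    raw = yaml.safe_load(fp)

cfg = Config.parse_obj(raw)   # pydantic v1 API; raises ValidationError on schema violations
print(cfg.options.log_level)  # fields are snake_cased via --snake-case-field
```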

View File

@ -0,0 +1,372 @@
openapi: 3.0.3
info:
version: 0.3.1
title: Schemata for command instructions
servers:
- url: "."
paths: {}
components:
schemas:
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Commands
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Commands:
description: |-
List of commands to test algorithms/datastructures.
type: array
items:
$ref: "#/components/schemas/Command"
default: []
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Command
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Command:
description: |-
Instructions for command to call
type: object
required:
- name
properties:
name:
$ref: '#/components/schemas/EnumAlgorithmNames'
additionalProperties: true
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Algorithm: Tarjan
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
CommandTarjan:
description: |-
Instructions for execution of Tarjan-Algorithm
type: object
required:
- name
- nodes
- edges
properties:
name:
$ref: '#/components/schemas/EnumAlgorithmNames'
nodes:
type: array
items:
anyOf:
- type: integer
- type: number
- type: string
edges:
type: array
items:
type: array
minItems: 2
maxItems: 2
items:
anyOf:
- type: integer
- type: number
- type: string
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Algorithm: TSP
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
CommandTsp:
description: |-
Instructions for execution of TSP-Algorithm
type: object
required:
- name
- optimise
- dist
properties:
name:
$ref: '#/components/schemas/EnumAlgorithmNames'
dist:
type: array
items:
type: array
items:
type: number
optimise:
$ref: '#/components/schemas/EnumOptimiseMode'
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Algorithm: Hirschberg
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
CommandHirschberg:
description: |-
Instructions for execution of Hirschberg-Algorithm
type: object
required:
- name
- word1
- word2
properties:
name:
$ref: '#/components/schemas/EnumAlgorithmNames'
word1:
description: Word that gets placed vertically in algorithm.
type: string
word2:
description: Word that gets placed horizontally in algorithm
type: string
once:
type: boolean
default: false
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Algorithm: Rucksack Branch & Bound
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
CommandRucksack:
description: |-
Instructions for execution of Branch & Bound-Algorithm for the Rucksack-Problem
type: object
required:
- name
- algorithm
- max-cost
- costs
- values
properties:
name:
$ref: '#/components/schemas/EnumAlgorithmNames'
algorithm:
$ref: '#/components/schemas/EnumRucksackAlgorithm'
allow-fractional:
type: boolean
default: false
max-cost:
description: Upper bound for total cost of rucksack.
type: number
minimum: 0
costs:
description: Array of cost for each item (e.g. volume, weight, price, time, etc.).
type: array
items:
type: number
exclusiveMinimum: 0
values:
description: Value extracted from each item (e.g. energy, profit, etc.).
type: array
items:
type: number
items:
description: Optional names of the items (if empty, defaults to 1-based indexes).
type: array
items:
type: string
default: []
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Algorithm: Random Walk
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
CommandRandomWalk:
description: |-
Instructions for execution of random walks to determine local extrema in a fitness landscape
type: object
required:
- name
- algorithm
- landscape
- optimise
properties:
name:
$ref: '#/components/schemas/EnumAlgorithmNames'
algorithm:
$ref: '#/components/schemas/EnumWalkMode'
landscape:
$ref: '#/components/schemas/DataTypeLandscapeGeometry'
optimise:
$ref: '#/components/schemas/EnumOptimiseMode'
coords-init:
description: Initial co-ordinates to start the algorithm.
type: array
items:
type: integer
minItems: 1
temperature-init:
type: number
format: float
default: 1.
annealing:
type: boolean
default: false
one-based:
type: boolean
default: false
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Algorithm: Genetic Algorithm
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
CommandGenetic:
description: |-
Instructions for execution of the Genetic algorithm
type: object
required:
- name
- population
properties:
name:
$ref: '#/components/schemas/EnumAlgorithmNames'
population:
type: array
items:
type: array
items:
type: string
minItems: 2
# maxItems: 2 # FIXME: does not work!
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Algorithm: Euklidean algorithm
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
CommandEuklid:
description: |-
Instructions for execution of the Euclidean gcd algorithm
type: object
required:
- name
- numbers
properties:
name:
$ref: '#/components/schemas/EnumAlgorithmNames'
numbers:
type: array
items:
type: integer
exclusiveMinimum: 0
minItems: 2
maxItems: 2
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Algorithm: Pollard's rho
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
CommandPollard:
description: |-
Instructions for execution of the Pollard's rho algorithm
type: object
required:
- name
- growth
- number
properties:
name:
$ref: '#/components/schemas/EnumAlgorithmNames'
number:
type: integer
exclusiveMinimum: 0
growth:
$ref: '#/components/schemas/EnumPollardGrowthRate'
x-init:
type: integer
default: 2
minimum: 2
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Data-type Landscape Geometry, Landscape Values
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
DataTypeLandscapeGeometry:
description: |-
Structure for the geometry of a fitness landscape
type: object
required:
- neighbourhoods
- labels
- values
properties:
neighbourhoods:
$ref: '#/components/schemas/DataTypeLandscapeNeighbourhoods'
labels:
type: array
items:
type: string
minItems: 1
values:
$ref: '#/components/schemas/DataTypeLandscapeValues'
DataTypeLandscapeNeighbourhoods:
description: |-
Options for the definition of discrete neighbourhoods of a fitness landscape
type: object
required:
- metric
properties:
radius:
type: number
minimum: 1
default: 1
metric:
$ref: '#/components/schemas/EnumLandscapeMetric'
DataTypeLandscapeValues:
description: |-
A (potentially multi-dimensional) array of values for the fitness landscape.
oneOf:
- type: array
items:
type: number
- type: array
items:
$ref: '#/components/schemas/DataTypeLandscapeValues'
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Enum Algorithm Names
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
EnumAlgorithmNames:
description: |-
Enumeration of possible algorithm options.
type: string
enum:
- TARJAN
- TSP
- HIRSCHBERG
- RUCKSACK
- RANDOM-WALK
- GENETIC
- EUKLID
- POLLARD-RHO
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Enum Optimise Mode
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
EnumOptimiseMode:
description: |-
Enumeration of optimisation modes.
type: string
enum:
- MIN
- MAX
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Enum Rucksack mode for algorithm
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
EnumRucksackAlgorithm:
description: |-
Enumeration of modes for the Rucksack problem
type: string
enum:
- GREEDY
- BRANCH-AND-BOUND
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Enum Type for choice of growth rate in Pollard Algorithm
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
EnumPollardGrowthRate:
description: |-
Via the 'tail-chasing' period finding method in Pollard's rho algorithm,
the difference between the indexes of the pseudo-random sequence
can be chosen to grow according to different rates, e.g.
- `LINEAR` - choose `x[k]` and `x[2k]`
- `EXPONENTIAL` - choose `x[k]` and `x[2^{k}]`
type: string
enum:
- LINEAR
- EXPONENTIAL
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Enum Type of walk mode for fitness walk algorithm
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
EnumWalkMode:
description: |-
Enumeration of walk modes for the fitness walk algorithm
- `ADAPTIVE` - points uniformly randomly chosen from nbhd.
- `GRADIENT` - points uniformly randomly chosen amongst points in nbhd with steepest gradient.
- `METROPOLIS` - points uniformly randomly chosen from nbhd. or by entropy.
type: string
enum:
- ADAPTIVE
- GRADIENT
- METROPOLIS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Enum for metric for neighbourhoods in fitness landscape
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
EnumLandscapeMetric:
description: |-
Enumeration of metrics for neighbourhoods in the fitness landscape
- `MAXIMUM` - `Q` is a neighbour of `P` <==> `max_i d(P_i, Q_i) <= r`
- `MANHATTAN` - `Q` is a neighbour of `P` <==> `sum_i d(P_i, Q_i) <= r`
type: string
enum:
- MAXIMUM
- MANHATTAN
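To make the two metric options concrete, an illustrative sketch (not the project's implementation) of how grid neighbourhoods could be enumerated under each metric:
```python
# Illustrative sketch: enumerate grid neighbours of a point P under the two
# metrics from EnumLandscapeMetric, excluding P itself.
from itertools import product

def neighbours(point: tuple[int, ...], shape: tuple[int, ...], radius: int, metric: str):
    for q in product(*(range(n) for n in shape)):
        diffs = [abs(p_i - q_i) for p_i, q_i in zip(point, q)]
        d = max(diffs) if metric == "MAXIMUM" else sum(diffs)  # else: MANHATTAN
        if 0 < d <= radius:
            yield q

# For the 6x6 landscape in assets/commands.yaml (0-based coords, radius 1):
print(len(list(neighbours((2, 2), (6, 6), 1, "MAXIMUM"))))    # -> 8
print(len(list(neighbours((2, 2), (6, 6), 1, "MANHATTAN"))))  # -> 4
```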

View File

@ -0,0 +1,205 @@
openapi: 3.0.3
info:
version: 0.3.1
title: Schemata for config models
servers:
- url: "."
paths: {}
components:
schemas:
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Config
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Config:
description: |-
Data model for all parts of the configuration.
type: object
required:
- info
- options
- calls
properties:
info:
$ref: "#/components/schemas/Info"
options:
$ref: "#/components/schemas/AppOptions"
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Info
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Info:
description: |-
Contains meta data about project.
type: object
required:
- title
- description
- author
properties:
title:
type: string
description:
type: string
author:
type: string
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# App Options
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
AppOptions:
description: |-
Options pertaining to the rudimentary setup of the app.
type: object
required:
- log-level
- tsp
- tarjan
- hirschberg
- rucksack
- random-walk
- genetic
- euklid
- pollard-rho
properties:
log-level:
$ref: '#/components/schemas/EnumLogLevel'
verbose:
description: Global setting for verbosity.
type: boolean
default: false
tarjan:
type: object
required:
- verbose
properties:
verbose:
type: boolean
default: false
tsp:
type: object
required:
- verbose
properties:
verbose:
type: boolean
default: false
hirschberg:
type: object
required:
- penality-mismatch
- penality-gap
- move-priorities
properties:
penality-mismatch:
type: number
default: 1
penality-gap:
type: number
default: 1
move-priorities:
type: object
properties:
diagonal:
type: integer
minimum: 0
default: 0
horizontal:
type: integer
minimum: 0
default: 1
vertical:
type: integer
minimum: 0
default: 2
verbose:
type: array
items:
$ref: '#/components/schemas/EnumHirschbergVerbosity'
default: []
show:
type: array
items:
$ref: '#/components/schemas/EnumHirschbergShow'
default: []
rucksack:
type: object
required: []
properties:
verbose:
type: boolean
default: false
show:
type: array
items:
$ref: '#/components/schemas/EnumRucksackShow'
default: []
random-walk:
type: object
required:
- verbose
properties:
verbose:
type: boolean
default: false
genetic:
type: object
required:
- verbose
properties:
verbose:
type: boolean
default: false
euklid:
type: object
required:
- verbose
properties:
verbose:
type: boolean
default: false
pollard-rho:
type: object
required:
- verbose
properties:
verbose:
type: boolean
default: false
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Enum LogLevel
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
EnumLogLevel:
description: |-
Enumeration of settings for log level.
type: string
enum:
- INFO
- DEBUG
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Enum Hirschberg - Verbosity options
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
EnumHirschbergVerbosity:
description: |-
Enumeration of verbosity options for Hirschberg
type: string
enum:
- COSTS
- MOVES
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Enum Hirschberg - display options
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
EnumHirschbergShow:
description: |-
Enumeration of display options for Hirschberg
type: string
enum:
- TREE
- ATOMS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Enum Rucksack - display options
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
EnumRucksackShow:
description: |-
Enumeration of display options for the Rucksack problem
type: string
enum:
- ALL-WEIGHTS
- ALL-SUMS

View File

@ -0,0 +1,56 @@
[project]
name = "uni-leipzig-ads-2-2022"
version = "0.3.1"
description = "Zusatzcode, um Algorithmen und Datenstrukturen im Kurs ADS2 zu demonstrieren."
authors = [ "Raj Dahya" ]
maintainers = [ "raj_mathe" ]
license = "MIT"
readme = "README.md"
python = "^3.10"
homepage = "https://gitea.math.uni-leipzig.de/raj_mathe"
repository = "https://gitea.math.uni-leipzig.de/raj_mathe/ads2_2022"
documentation = "https://gitea.math.uni-leipzig.de/raj_mathe/ads2_2022/README.md"
keywords = [
"algorithmmen und datenstrukturen 2",
"sommersemester",
"2022",
"universität leipzig",
]
# cf. https://pypi.org/classifiers
classifiers = [
"Development Status :: 3 - Alpha",
"Environment :: Console",
"Intended Audience :: Developers",
"Operating System :: Unix",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
]
[tool.pytest.ini_options]
minversion = "7.1.1"
testpaths = [
"tests",
]
python_files = [
"**/tests_*.py",
]
asyncio_mode = "auto"
filterwarnings = [
"error",
"ignore::UserWarning",
"ignore::DeprecationWarning",
]
# NOTE: appends (not prepends) flags:
addopts = [
"--order-dependencies",
"--order-group-scope=module",
"--cache-clear",
"--verbose",
"--maxfail=1",
"-k test_",
"--no-cov-on-fail",
"--cov-report=term",
"--cov-config=.coveragerc",
]

View File

@ -0,0 +1,36 @@
pip>=22.1.2
wheel>=0.37.1
# running
anyio>=3.5.0
aiohttp>=3.8.1
asyncio>=3.4.3
codetiming>=1.3.0
# testing + dev
coverage[toml]>=6.4
pytest>=7.1.1
pytest-asyncio>=0.18.3
pytest-cov>=3.0.0
pytest-lazy-fixture>=0.6.3
pytest-order>=1.0.1
testfixtures>=6.18.5
# config
python-dotenv>=0.2.0
jsonschema>=4.4.0
lazy-load>=0.8.2
pyyaml>=6.0
pydantic>=1.9.0
datamodel-code-generator>=0.13.0
openapi-generator-cli>=4.3.1
# misc
lorem>=0.1.1
safetywrap>=1.5.0
typing>=3.7.4.3
# maths
numpy>=1.22.3
pandas>=1.4.1
tabulate>=0.8.9

View File

View File

View File

@ -0,0 +1,16 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# IMPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
from src.algorithms.euklid.algorithms import *;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# EXPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
__all__ = [
'euklidean_algorithm',
];

View File

@ -0,0 +1,97 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# IMPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
from src.thirdparty.types import *;
from src.thirdparty.maths import *;
from models.generated.config import *;
from src.core.utils import *;
from src.models.euklid import *;
from src.algorithms.euklid.display import *;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# EXPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
__all__ = [
'euklidean_algorithm',
];
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# METHOD euklidean algorithm
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def euklidean_algorithm(
a: int,
b: int,
verbose: bool = False,
) -> Tuple[int, int, int]:
'''
Führt den Euklideschen Algorithmus aus, um den größten gemeinsamen Teiler (ggT, en: gcd)
von zwei positiven Zahlen zu berechnen.
'''
################
# NOTE:
# Lemma: gcd(a,b) = gcd(mod(a, b), b)
# Darum immer weiter (a, b) durch (b, gcd(a,b)) ersetzen, bis b == 0.
################
steps = [];
d = 0;
while True:
if b == 0:
d = a;
steps.append(Step(a=a, b=b, gcd=d, div=0, rem=a, coeff_a=1, coeff_b=0));
break;
else:
# Berechne k, r so dass a = k·b + r mit k ≥ 0, 0 ≤ r < b:
r = a % b;
k = math.floor(a / b);
# Speichere Berechnungen:
steps.append(Step(a=a, b=b, gcd=0, div=k, rem=r, coeff_a=0, coeff_b=0));
# ersetze a, b durch b, r:
a = b;
b = r;
################
# NOTE:
# In jedem step gilt
# a = k·b + r
# und im folgenden gilt:
# d = coeff_a'·a' + coeff_b'·b'
# wobei
# a' = b
# b' = r
# Darum:
# d = coeff_a'·b + coeff_b'·(a - k·b)
# = coeff_b'·a + (coeff_a' - k·coeff_b')·b
# Darum:
# coeff_a = coeff_b'
# coeff_b = coeff_a' - k·coeff_b'
################
coeff_a = 1;
coeff_b = 0;
for step in steps[::-1][1:]:
(coeff_a, coeff_b) = (coeff_b, coeff_a - step.div * coeff_b);
step.coeff_a = coeff_a;
step.coeff_b = coeff_b;
step.gcd = d;
if verbose:
step = steps[0];
repr = display_table(steps=steps, reverse=True);
expr = display_sum(step=step);
print('');
print('\x1b[1mEuklidescher Algorithmus\x1b[0m');
print('');
print(repr);
print('');
print('\x1b[1mLösung\x1b[0m');
print('');
print(f'a=\x1b[1m{step.a}\x1b[0m; b=\x1b[1m{step.b}\x1b[0m; d = \x1b[1m{step.gcd}\x1b[0m = {expr}.');
print('');
return d, coeff_a, coeff_b;
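# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# USAGE SKETCH
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Minimal sketch of a call to the method above; the numbers are illustrative.
# For a = 240, b = 46 one has gcd(240, 46) = 2 = (-9)·240 + 47·46, so the returned
# coefficients should satisfy the Bézout identity d == coeff_a·a + coeff_b·b.
if __name__ == '__main__':
    d, coeff_a, coeff_b = euklidean_algorithm(a=240, b=46, verbose=True);
    assert d == coeff_a * 240 + coeff_b * 46;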

View File

@ -0,0 +1,56 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# IMPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
from src.thirdparty.code import *;
from src.thirdparty.maths import *;
from src.thirdparty.types import *;
from src.models.euklid import *;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# EXPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
__all__ = [
'display_table',
'display_sum',
];
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# METHOD display table
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def display_table(
steps: List[Step],
reverse: bool = False,
) -> str:
if reverse:
steps = steps[::-1];
table = pd.DataFrame({
'a': [step.a for step in steps],
'b': [step.b for step in steps],
'div': ['-' if step.b == 0 else step.div for step in steps],
'gcd': [step.gcd for step in steps],
'expr': [f'= {display_sum(step=step)}' for step in steps],
}) \
.reset_index(drop=True);
# benutze pandas-Dataframe + tabulate, um schöner darzustellen:
repr = tabulate(
table,
headers=['a', 'b', 'floor(a/b)', 'gcd(a,b)', 'gcd(a,b)=x·a + y·b'],
showindex=False,
colalign=('right', 'right', 'right', 'center', 'left'),
tablefmt='simple'
);
return repr;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# METHOD display table
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def display_sum(step: Step) -> str:
return f'\x1b[1m{step.coeff_a}\x1b[0m·a + \x1b[1m{step.coeff_b}\x1b[0m·b' ;

View File

@ -0,0 +1,16 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# IMPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
from src.algorithms.genetic.algorithms import *;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# EXPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
__all__ = [
'genetic_algorithm',
];

View File

@ -0,0 +1,38 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# IMPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
from src.thirdparty.types import *;
from src.thirdparty.maths import *;
from models.generated.config import *;
from src.core.log import *;
from src.core.utils import *;
from src.models.genetic import *;
from src.algorithms.genetic.display import *;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# EXPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
__all__ = [
'genetic_algorithm',
];
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# METHOD genetic algorithm
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def genetic_algorithm(
individual1: List[str],
individual2: List[str],
verbose: bool,
):
'''
Führt den genetischen Algorithmus auf 2 Individuen aus.
'''
log_warn('Noch nicht implementiert!');
return;

View File

@ -0,0 +1,30 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# IMPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
from src.thirdparty.code import *;
from src.thirdparty.maths import *;
from src.thirdparty.types import *;
from src.core.log import *;
from src.models.genetic import *;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# EXPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
__all__ = [
'display_table',
];
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# METHOD display table
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def display_table(
) -> str:
log_warn('Noch nicht implementiert!');
return '';

View File

@ -0,0 +1,18 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# IMPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
from src.algorithms.hirschberg.algorithms import *;
from src.algorithms.hirschberg.display import *;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# EXPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
__all__ = [
'simple_algorithm',
'hirschberg_algorithm',
];

View File

@ -0,0 +1,147 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# IMPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
from src.thirdparty.types import *;
from src.thirdparty.maths import *;
from models.generated.config import *;
from src.models.hirschberg.penalties import *;
from src.algorithms.hirschberg.display import *;
from src.algorithms.hirschberg.matrix import *;
from src.algorithms.hirschberg.paths import *;
from src.models.hirschberg.alignment import *;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# EXPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
__all__ = [
'hirschberg_algorithm',
'simple_algorithm',
];
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# METHOD hirschberg_algorithm
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def simple_algorithm(
X: str,
Y: str,
verbose: List[EnumHirschbergVerbosity] = [],
) -> Tuple[str, str]:
'''
Dieser Algorithmus berechnet die Edit-Distanzen + optimale Richtungen ein Mal.
Daraus wird ein optimales Alignment direkt abgeleitet.
'''
Costs, Moves = compute_cost_matrix(X = '-' + X, Y = '-' + Y);
path = reconstruct_optimal_path(Moves=Moves);
word_x, word_y = reconstruct_words(X = '-' + X, Y = '-' + Y, moves=[Moves[coord] for coord in path], path=path);
if verbose != []:
repr = display_cost_matrix(Costs=Costs, path=path, X = '-' + X, Y = '-' + Y, verbose=verbose);
display = word_y + f'\n{"-"*len(word_x)}\n' + word_x;
print(f'\n{repr}\n\n\x1b[1mOptimales Alignment:\x1b[0m\n\n{display}\n');
return word_x, word_y;
def hirschberg_algorithm(
X: str,
Y: str,
verbose: List[EnumHirschbergVerbosity] = [],
show: List[EnumHirschbergShow] = [],
) -> Tuple[str, str]:
'''
Der Hirschberg-Algorithmus berechnet nur die Edit-Distanzen (Kostenmatrix)
und weder speichert noch berechnet die Matrix der optimalen Richtungen.
Dies liefert eine Platz-effizientere Methode als die simple Methode.
Durch Rekursion wird eine Art Traceback durch die zugrunde liegende DP erreicht.
Daraus wird unmittelbar ein optimales Alignment bestimmt.
Des Weiteren werden Zeitkosten durch Divide-and-Conquer klein gehalten.
'''
align = hirschberg_algorithm_step(X=X, Y=Y, depth=1, verbose=verbose, show=show);
word_x = align.as_string1();
word_y = align.as_string2();
# verbose output hier behandeln (irrelevant für Algorithmus):
if verbose != []:
if EnumHirschbergShow.tree in show:
display = align.astree(braces=True);
else:
display_x = align.as_string1(braces=True);
display_y = align.as_string2(braces=True);
display = display_y + f'\n{"-"*len(display_x)}\n' + display_x;
print(f'\n\x1b[1mOptimales Alignment:\x1b[0m\n\n{display}\n');
return word_x, word_y;
def hirschberg_algorithm_step(
X: str,
Y: str,
depth: int = 0,
verbose: List[EnumHirschbergVerbosity] = [],
show: List[EnumHirschbergShow] = [],
) -> Alignment:
'''
Der rekursive Schritt des Hirschberg-Algorithmus teilt eines der Wörter in zwei
und bestimmt eine entsprechende Aufteilung des zweiten Wortes in zwei,
die die Edit-Distanz minimiert.
Dies liefert uns Information über eine Stelle des optimalen Pfads durch die Kostenmatrix
sowie eine Aufteilung des Problems in eine linke und rechte Hälfte.
'''
n = len(Y);
if n == 1:
Costs, Moves = compute_cost_matrix(X = '-' + X, Y = '-' + Y);
path = reconstruct_optimal_path(Moves=Moves);
word_x, word_y = reconstruct_words(X = '-' + X, Y = '-' + Y, moves=[Moves[coord] for coord in path], path=path);
# verbose output hier behandeln (irrelevant für Algorithmus):
if verbose != [] and (EnumHirschbergShow.atoms in show):
repr = display_cost_matrix(Costs=Costs, path=path, X = '-' + X, Y = '-' + Y, verbose=verbose);
print(f'\n\x1b[1mRekursionstiefe: {depth}\x1b[0m\n\n{repr}');
return AlignmentBasic(word1=word_x, word2=word_y);
else:
n = int(np.ceil(n/2));
# bilde linke Hälfte vom horizontalen Wort:
Y1 = Y[:n];
X1 = X;
# bilde rechte Hälfte vom horizontalen Wort (und kehre h. + v. um):
Y2 = Y[n:][::-1];
X2 = X[::-1];
# Löse Teilprobleme:
Costs1, Moves1 = compute_cost_matrix(X = '-' + X1, Y = '-' + Y1);
Costs2, Moves2 = compute_cost_matrix(X = '-' + X2, Y = '-' + Y2);
# verbose output hier behandeln (irrelevant für Algorithmus):
if verbose != []:
path1, path2 = reconstruct_optimal_path_halves(Costs1=Costs1, Costs2=Costs2, Moves1=Moves1, Moves2=Moves2);
repr = display_cost_matrix_halves(
Costs1 = Costs1,
Costs2 = Costs2,
path1 = path1,
path2 = path2,
X1 = '-' + X1,
X2 = '-' + X2,
Y1 = '-' + Y1,
Y2 = '-' + Y2,
verbose = verbose,
);
print(f'\n\x1b[1mRekursionstiefe: {depth}\x1b[0m\n\n{repr}');
# Koordinaten des optimalen Übergangs berechnen:
coord1, coord2 = get_optimal_transition(Costs1=Costs1, Costs2=Costs2);
p = coord1[0];
# Divide and Conquer ausführen:
align_left = hirschberg_algorithm_step(X=X[:p], Y=Y[:n], depth=depth+1, verbose=verbose, show=show);
align_right = hirschberg_algorithm_step(X=X[p:], Y=Y[n:], depth=depth+1, verbose=verbose, show=show);
# Resultate zusammensetzen:
return AlignmentPair(left=align_left, right=align_right);
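# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# USAGE SKETCH
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Minimal sketch of a call to the divide-and-conquer variant; the two words are illustrative.
# With the verbose options the cost matrices of every recursion step are printed,
# and `show=[EnumHirschbergShow.tree]` additionally displays the recursion tree of the alignment.
if __name__ == '__main__':
    word_x, word_y = hirschberg_algorithm(
        X = 'happily',
        Y = 'applied',
        verbose = [EnumHirschbergVerbosity.costs, EnumHirschbergVerbosity.moves],
        show = [EnumHirschbergShow.tree],
    );
    print(word_y);
    print(word_x);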

View File

@ -0,0 +1,123 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# IMPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
from src.thirdparty.types import *;
from src.thirdparty.maths import *;
from models.generated.config import *;
from src.models.hirschberg.penalties import *;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# EXPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
__all__ = [
'represent_cost_matrix',
'display_cost_matrix',
'display_cost_matrix_halves',
];
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# METHODS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def represent_cost_matrix(
Costs: np.ndarray, # NDArray[(Any, Any), int],
path: List[Tuple[int, int]],
X: str,
Y: str,
verbose: List[EnumHirschbergVerbosity],
pad: bool = False,
) -> np.ndarray: # NDArray[(Any, Any), Any]:
m = len(X); # display vertically
n = len(Y); # display horizontally
# erstelle string-Array:
if pad:
table = np.full(shape=(3 + m + 3, 3 + n + 1), dtype=object, fill_value='');
else:
table = np.full(shape=(3 + m, 3 + n), dtype=object, fill_value='');
# topmost rows:
table[0, 3:(3+n)] = [ f'\x1b[2m{j}\x1b[0m' for j in range(n) ];
table[1, 3:(3+n)] = [ f'\x1b[1m{y}\x1b[0m' for y in Y ];
table[2, 3:(3+n)] = '--';
# leftmost columns:
table[3:(3+m), 0] = [ f'\x1b[2m{i}\x1b[0m' for i in range(m) ];
table[3:(3+m), 1] = [ f'\x1b[1m{x}\x1b[0m' for x in X ];
table[3:(3+m), 2] = '|';
if pad:
table[-3, 3:(3+n)] = '--';
table[3:(3+m), -1] = '|';
if EnumHirschbergVerbosity.costs in verbose:
table[3:(3+m), 3:(3+n)] = Costs.copy();
if EnumHirschbergVerbosity.moves in verbose:
for (i, j) in path:
table[3 + i, 3 + j] = f'\x1b[31;4;1m{table[3 + i, 3 + j]}\x1b[0m';
elif EnumHirschbergVerbosity.moves in verbose:
table[3:(3+m), 3:(3+n)] = '\x1b[2m.\x1b[0m';
for (i, j) in path:
table[3 + i, 3 + j] = '\x1b[31;1m*\x1b[0m';
return table;
def display_cost_matrix(
Costs: np.ndarray, # NDArray[(Any, Any), int],
path: List[Tuple[int, int]],
X: str,
Y: str,
verbose: EnumHirschbergVerbosity,
) -> str:
'''
Zeigt Kostenmatrix + optimalen Pfad.
@inputs
- `Costs` - Kostenmatrix
- `Moves` - Kodiert die optimalen Schritte
- `X`, `Y` - Strings
@returns
- eine 'printable' Darstellung der Matrix mit den Strings X, Y + Indexes.
'''
table = represent_cost_matrix(Costs=Costs, path=path, X=X, Y=Y, verbose=verbose);
# benutze pandas-Dataframe + tabulate, um schöner darzustellen:
repr = tabulate(pd.DataFrame(table), showindex=False, stralign='center', tablefmt='plain');
return repr;
def display_cost_matrix_halves(
Costs1: np.ndarray, # NDArray[(Any, Any), int],
Costs2: np.ndarray, # NDArray[(Any, Any), int],
path1: List[Tuple[int, int]],
path2: List[Tuple[int, int]],
X1: str,
X2: str,
Y1: str,
Y2: str,
verbose: EnumHirschbergVerbosity,
) -> str:
'''
Zeigt Kostenmatrix + optimalen Pfad für Schritt im D & C Hirschberg-Algorithmus
@inputs
- `Costs1`, `Costs2` - Kostenmatrizen
- `Moves1`, `Moves2` - Kodiert die optimalen Schritte
- `X1`, `X2`, `Y1`, `Y2` - Strings
@returns
- eine 'printable' Darstellung der Matrix mit den Strings X, Y + Indexes.
'''
table1 = represent_cost_matrix(Costs=Costs1, path=path1, X=X1, Y=Y1, verbose=verbose, pad=True);
table2 = represent_cost_matrix(Costs=Costs2, path=path2, X=X2, Y=Y2, verbose=verbose, pad=True);
# merge Tabellen:
table = np.concatenate([table1[:, :-1], table2[::-1, ::-1]], axis=1);
# benutze pandas-Dataframe + tabulate, um schöner darzustellen:
repr = tabulate(pd.DataFrame(table), showindex=False, stralign='center', tablefmt='plain');
return repr;

View File

@ -0,0 +1,127 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# IMPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
from src.thirdparty.types import *;
from src.thirdparty.maths import *;
from src.models.hirschberg import *;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# EXPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
__all__ = [
'compute_cost_matrix',
'update_cost_matrix',
];
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# METHODS cost matrix + optimal paths
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def compute_cost_matrix(
X: str,
Y: str,
) -> Tuple[np.ndarray, np.ndarray]: # Tuple[NDArray[(Any, Any), int], NDArray[(Any, Any), Directions]]:
'''
Berechnet Hirschberg-Costs-Matrix (ohne Rekursion).
Annahmen:
- X[0] = gap
- Y[0] = gap
'''
m = len(X); # display vertically
n = len(Y); # display horizontally
Costs = np.full(shape=(m, n), dtype=int, fill_value=0);
Moves = np.full(shape=(m, n), dtype=Directions, fill_value=Directions.UNSET);
# zuerst 0. Spalte und 0. Zeile ausfüllen:
for i, x in list(enumerate(X))[1:]:
update_cost_matrix(Costs, Moves, x, '', i, 0);
for j, y in list(enumerate(Y))[1:]:
update_cost_matrix(Costs, Moves, '', y, 0, j);
# jetzt alle »inneren« Werte bestimmen:
for i, x in list(enumerate(X))[1:]:
for j, y in list(enumerate(Y))[1:]:
update_cost_matrix(Costs, Moves, x, y, i, j);
return Costs, Moves;
def update_cost_matrix(
Costs: np.ndarray, # NDArray[(Any, Any), int],
Moves: np.ndarray, # NDArray[(Any, Any), Directions],
x: str,
y: str,
i: int,
j: int,
):
'''
Schrittweise Funktion zur Aktualisierung vom Eintrag `(i,j)` in der Kostenmatrix.
Annahme:
- alle »Vorgänger« von `(i,j)` in der Matrix sind bereits optimiert.
@inputs
- `Costs` - bisher berechnete Kostenmatrix
- `Moves` - bisher berechnete optimale Schritte
- `i`, `x` - Position und Wert in String `X` (»vertical« dargestellt)
- `j`, `y` - Position und Wert in String `Y` (»horizontal« dargestellt)
'''
# nichts zu tun, wenn (i, j) == (0, 0):
if i == 0 and j == 0:
Costs[0, 0] = 0;
return;
################################
# NOTE: Berechnung von möglichen Moves wie folgt.
#
# Fall 1: (i-1,j-1) ---> (i,j)
# ==> Stringvergleich ändert sich wie folgt:
# s1 s1 x
# ---- ---> ------
# s2 s2 y
#
# Fall 2: (i,j-1) ---> (i,j)
# ==> Stringvergleich ändert sich wie folgt:
# s1 s1 GAP
# ---- ---> -------
# s2 s2 y
#
# Fall 3: (i-1,j) ---> (i,j)
# ==> Stringvergleich ändert sich wie folgt:
# s1 s1 x
# ---- ---> -------
# s2 s2 GAP
#
# Diese Fälle berücksichtigen wir:
################################
edges = [];
if i > 0 and j > 0:
edges.append((
Directions.DIAGONAL,
Costs[i-1, j-1] + missmatch_penalty(x, y),
));
if j > 0:
edges.append((
Directions.HORIZONTAL,
Costs[i, j-1] + gap_penalty(y),
));
if i > 0:
edges.append((
Directions.VERTICAL,
Costs[i-1, j] + gap_penalty(x),
));
if len(edges) > 0:
# Sortiere nach Priorität (festgelegt in Enum):
edges = sorted(edges, key=lambda x: x[0].value);
# Wähle erste Möglichkeit mit minimalen Kosten:
index = np.argmin([ cost for _, cost in edges]);
Moves[i, j], Costs[i, j] = edges[index];
return;
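# NOTE (sketch of the recurrence realised above):
#
#     Costs[i, j] = min( Costs[i-1, j-1] + missmatch_penalty(x, y),
#                        Costs[i,   j-1] + gap_penalty(y),
#                        Costs[i-1, j  ] + gap_penalty(x) ),
#
# with ties broken by the priority order encoded in the Directions enum.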

View File

@ -0,0 +1,125 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# IMPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
from src.thirdparty.types import *;
from src.thirdparty.maths import *;
from src.models.hirschberg import *;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# EXPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
__all__ = [
'get_optimal_transition',
'reconstruct_optimal_path',
'reconstruct_optimal_path_halves',
'reconstruct_words',
];
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# METHODS optimaler treffpunkt
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def get_optimal_transition(
Costs1: np.ndarray, # NDArray[(Any, Any), int],
Costs2: np.ndarray, # NDArray[(Any, Any), int],
) -> Tuple[Tuple[int, int], Tuple[int, int]]:
'''
Rekonstruiere »Treffpunkt«, wo die Gesamtkosten minimiert sind.
Dieser Punkt stellt einen optimalen Übergang für den Rekursionsschritt dar.
'''
(m, n1) = Costs1.shape;
(m, n2) = Costs2.shape;
info = [
(
Costs1[i, n1-1] + Costs2[m-1-i, n2-1],
(i, n1-1),
(m-1-i, n2-1),
)
for i in range(m)
];
index = np.argmin([ cost for cost, _, _ in info ]);
coord1 = info[index][1];
coord2 = info[index][2];
return coord1, coord2;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# METHODS reconstruction von words/paths
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def reconstruct_optimal_path(
Moves: np.ndarray, # NDArray[(Any, Any), Directions],
coord: Optional[Tuple[int, int]] = None,
) -> List[Tuple[int, int]]:
'''
Liest aus der Matrix der optimalen Schritte den optimalen Pfad aus,
angefangen bei den Endkoordinaten.
'''
if coord is None:
m, n = Moves.shape;
(i, j) = (m-1, n-1);
else:
(i, j) = coord;
path = [(i, j)];
while (i, j) != (0, 0):
match Moves[i, j]:
case Directions.DIAGONAL:
(i, j) = (i - 1, j - 1);
case Directions.HORIZONTAL:
(i, j) = (i, j - 1);
case Directions.VERTICAL:
(i, j) = (i - 1, j);
case _:
break;
path.append((i, j));
return path[::-1];
def reconstruct_optimal_path_halves(
Costs1: np.ndarray, # NDArray[(Any, Any), int],
Costs2: np.ndarray, # NDArray[(Any, Any), int],
Moves1: np.ndarray, # NDArray[(Any, Any), Directions],
Moves2: np.ndarray, # NDArray[(Any, Any), Directions],
) -> Tuple[List[Tuple[int, int]], List[Tuple[int, int]]]:
'''
Rekonstruiere optimale Pfade für den Rekursionsschritt,
wenn horizontales Wort in 2 aufgeteilt wird.
'''
coord1, coord2 = get_optimal_transition(Costs1=Costs1, Costs2=Costs2);
path1 = reconstruct_optimal_path(Moves1, coord=coord1);
path2 = reconstruct_optimal_path(Moves2, coord=coord2);
return path1, path2;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# METHODS reconstruction von words
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def reconstruct_words(
X: str,
Y: str,
moves: List[Directions],
path: List[Tuple[int, int]],
) -> Tuple[str, str]:
'''
Berechnet String-Alignment aus Path.
'''
word_x = '';
word_y = '';
for ((i, j), move) in zip(path, moves):
x = X[i];
y = Y[j];
match move:
case Directions.DIAGONAL:
word_x += x;
word_y += y;
case Directions.HORIZONTAL:
word_x += '-';
word_y += y;
case Directions.VERTICAL:
word_x += x;
word_y += '-';
return word_x, word_y;

View File

@ -0,0 +1,17 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# IMPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
from src.algorithms.pollard_rho.algorithms import *;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# EXPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
__all__ = [
'pollard_rho_algorithm_linear',
'pollard_rho_algorithm_exponential',
];

View File

@ -0,0 +1,144 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# IMPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
from src.thirdparty.types import *;
from src.thirdparty.maths import *;
from models.generated.config import *;
from src.core.utils import *;
from src.models.pollard_rho import *;
from src.algorithms.pollard_rho.display import *;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# EXPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
__all__ = [
'pollard_rho_algorithm_linear',
'pollard_rho_algorithm_exponential',
];
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# METHOD pollard's rho algorithm - with linear growth
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def pollard_rho_algorithm_linear(
n: int,
x_init: int = 2,
verbose: bool = False,
):
steps = [];
success = False;
f = lambda _: fct(_, n=n);
d = 1;
x = y = x_init;
steps.append(Step(x=x));
k = 0;
k_next = 1;
while True:
# aktualisiere x: x = f(x_prev):
x = f(x);
# aktualisiere y: y = f(f(y_prev)):
y = f(f(y));
# ggT berechnen:
d = math.gcd(abs(x-y), n);
steps.append(Step(x=x, y=y, d=d));
# Abbruchkriterien prüfen:
if d == 1: # weitermachen, solange d == 1
k += 1;
continue;
elif d == n: # versagt
success = False;
break;
else:
success = True;
break;
if verbose:
repr = display_table_linear(steps=steps);
print('');
print('\x1b[1mPollard-Rho-Algorithmus (linear)\x1b[0m');
print('');
print(repr);
print('');
if success:
print('\x1b[1mBerechneter Faktor:\x1b[0m');
print('');
print(f'd = \x1b[1m{d}\x1b[0m.');
else:
print('\x1b[91mKein (Prim)faktor erkannt!\x1b[0m');
print('');
return d;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# METHOD pollard's rho algorithm - with exponential growth
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def pollard_rho_algorithm_exponential(
n: int,
x_init: int = 2,
verbose: bool = False,
):
steps = [];
success = False;
f = lambda _: fct(_, n=n);
d = 1;
x = y = x_init;
steps.append(Step(x=x));
k = 0;
k_next = 1;
while True:
# aktualisiere x: x = f(x_prev):
x = f(x);
# aktualisiere y, wenn k = 2^j: y = x[j] = f(y_prev):
if k == k_next:
k_next = 2*k_next;
y = f(y);
# ggT berechnen:
d = math.gcd(abs(x-y), n);
steps.append(Step(x=x, y=y, d=d));
# Abbruchkriterien prüfen:
if d == 1: # weitermachen, solange d == 1
k += 1;
continue;
elif d == n: # versagt
success = False;
break;
else:
success = True;
break;
if verbose:
repr = display_table_exponential(steps=steps);
print('');
print('\x1b[1mPollard-Rho-Algorithmus (exponentiell)\x1b[0m');
print('');
print(repr);
print('');
if success:
print('\x1b[1mBerechneter Faktor:\x1b[0m');
print('');
print(f'd = \x1b[1m{d}\x1b[0m.');
else:
print('\x1b[91mKein (Prim)faktor erkannt!\x1b[0m');
print('');
return d;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# AUXILIARY METHOD function for Pollard's rho
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def fct(x: int, n: int) -> int:
return (x**2 - 1) % n;
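# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# USAGE SKETCH
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Minimal sketch; n = 91 = 7·13 is an illustrative input. With the default x_init = 2
# and f(x) = x² - 1 mod n, the linear variant compares x[k] with x[2k] and should
# detect the factor 13 after a few iterations.
if __name__ == '__main__':
    d = pollard_rho_algorithm_linear(n=91, verbose=True);
    print(f'Gefundener Faktor: {d}');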

View File

@ -0,0 +1,65 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# IMPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
from src.thirdparty.code import *;
from src.thirdparty.maths import *;
from src.thirdparty.types import *;
from src.models.pollard_rho import *;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# EXPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
__all__ = [
'display_table_linear',
'display_table_exponential',
];
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# METHOD display table - linear
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def display_table_linear(steps: List[Step]) -> str:
table = pd.DataFrame({
'i': [i for i in range(len(steps))],
'x': [step.x for step in steps],
'y': [step.y or '-' for step in steps],
'd': [step.d or '-' for step in steps],
}) \
.reset_index(drop=True);
# benutze pandas-Dataframe + tabulate, um schöner darzustellen:
repr = tabulate(
table,
headers=['i', 'x(i)', 'y(i) = x(2i)', 'gcd(|x - y|,n)'],
showindex=False,
colalign=('right', 'right', 'right', 'center'),
tablefmt='simple',
);
return repr;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# METHOD display table - exponential
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def display_table_exponential(steps: List[Step]) -> str:
table = pd.DataFrame({
'i': [i for i in range(len(steps))],
'x': [step.x for step in steps],
'y': [step.y or '-' for step in steps],
'd': [step.d or '-' for step in steps],
}) \
.reset_index(drop=True);
# benutze pandas-Dataframe + tabulate, um schöner darzustellen:
repr = tabulate(
table,
headers=['i', 'x(i)', 'y(i) = x([log₂(i)])', 'gcd(|x - y|,n)'],
showindex=False,
colalign=('right', 'right', 'right', 'center'),
tablefmt='simple',
);
return repr;

View File

@ -0,0 +1,18 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# IMPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
from src.algorithms.random_walk.algorithms import *;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# EXPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
__all__ = [
'adaptive_walk_algorithm',
'gradient_walk_algorithm',
'metropolis_walk_algorithm',
];

View File

@ -0,0 +1,247 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# IMPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
from src.thirdparty.maths import *;
from src.thirdparty.plots import *;
from src.thirdparty.types import *;
from models.generated.config import *;
from models.generated.commands import *;
from src.core.log import *;
from src.core.utils import *;
from src.models.random_walk import *;
from src.algorithms.random_walk.display import *;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# EXPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
__all__ = [
'adaptive_walk_algorithm',
'gradient_walk_algorithm',
'metropolis_walk_algorithm',
];
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# CONSTANTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
MAX_ITERATIONS = 1000; # um endlose Schleifen zu verhindern
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# METHOD adaptive walk
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def adaptive_walk_algorithm(
landscape: Landscape,
r: float,
coords_init: tuple,
optimise: EnumOptimiseMode,
verbose: bool,
):
'''
Führt den Adaptive-Walk-Algorithmus aus, um ein lokales Minimum zu bestimmen.
'''
# lege Fitness- und Umgebungsfunktionen fest:
match optimise:
case EnumOptimiseMode.max:
f = lambda x: -landscape.fitness(*x);
case _:
f = lambda x: landscape.fitness(*x);
nbhd = lambda x: landscape.neighbourhood(*x, r=r, strict=True);
label = lambda x: landscape.label(*x);
# initialisiere
steps = [];
x = coords_init;
fx = f(x);
fy = fx;
N = nbhd(x);
# führe walk aus:
k = 0;
while k < MAX_ITERATIONS:
# Wähle zufälligen Punkt und berechne fitness-Wert:
y = uniform_random_choice(N);
fy = f(y);
# Nur dann aktualisieren, wenn sich f-Wert verbessert:
if fy < fx:
# Punkt + Umgebung + f-Wert aktualisieren
x = y;
fx = fy;
N = nbhd(x);
step = Step(coords=x, label=label(x), improved=True, changed=True);
else:
# Nichts (außer logging) machen!
step = Step(coords=x, label=label(x));
# Nur dann (erfolgreich) abbrechen, wenn f-Wert lokal Min:
if fx <= min([f(y) for y in N], default=fx):
step.stopped = True;
steps.append(step);
break;
steps.append(step);
k += 1;
return x;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# METHOD gradient walk
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def gradient_walk_algorithm(
landscape: Landscape,
r: float,
coords_init: tuple,
optimise: EnumOptimiseMode,
verbose: bool,
):
'''
Führt den Gradient-Descent (bzw. Ascent) Algorithmus aus, um ein lokales Minimum zu bestimmen.
'''
# lege Fitness- und Umgebungsfunktionen fest:
match optimise:
case EnumOptimiseMode.max:
f = lambda x: -landscape.fitness(*x);
case _:
f = lambda x: landscape.fitness(*x);
nbhd = lambda x: landscape.neighbourhood(*x, r=r, strict=True);
label = lambda x: landscape.label(*x);
# initialisiere
steps = [];
x = coords_init;
fx = f(x);
fy = fx;
N = nbhd(x);
f_values = [f(y) for y in N];
fmin = min(f_values);
Z = [y for y, fy in zip(N, f_values) if fy == fmin];
# führe walk aus:
k = 0;
while k < MAX_ITERATIONS:
# Wähle zufälligen Punkt mit steilstem Abstieg und berechne fitness-Wert:
y = uniform_random_choice(Z);
fy = fmin;
# Nur dann aktualisieren, wenn sich f-Wert verbessert:
if fy < fx:
# Punkt + Umgebung + f-Wert aktualisieren
x = y;
fx = fy;
N = nbhd(y);
f_values = [f(y) for y in N];
fmin = min(f_values);
Z = [y for y, fy in zip(N, f_values) if fy == fmin];
step = Step(coords=x, label=label(x), improved=True, changed=True);
else:
# Nichts (außer logging) machen!
step = Step(coords=x, label=label(x));
# Nur dann (erfolgreich) abbrechen, wenn f-Wert lokal Min:
if fx <= min([f(y) for y in N], default=fx):
step.stopped = True;
steps.append(step);
break;
steps.append(step);
k += 1;
return x;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# METHOD metropolis walk
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def metropolis_walk_algorithm(
landscape: Landscape,
r: float,
coords_init: tuple,
T: float,
annealing: bool,
optimise: EnumOptimiseMode,
verbose: bool,
):
'''
Führt den Metropolis-Walk Algorithmus aus, um ein lokales Minimum zu bestimmen.
'''
# lege Fitness- und Umgebungsfunktionen fest:
match optimise:
case EnumOptimiseMode.max:
f = lambda x: -landscape.fitness(*x);
case _:
f = lambda x: landscape.fitness(*x);
nbhd = lambda x: landscape.neighbourhood(*x, r=r, strict=True);
label = lambda x: landscape.label(*x);
# definiere anzahl der hinreichenden Schritt für Stabilität:
n_stable = 2*(3**(landscape.dim) - 1);
# initialisiere
x = coords_init;
fx = f(x);
fy = fx;
nbhd_x = nbhd(x);
steps = [];
step = Step(coords=x, label=label(x));
# führe walk aus:
k = 0;
n_unchanged = 0;
while k < MAX_ITERATIONS:
# Wähle zufälligen Punkt und berechne fitness-Wert:
y = uniform_random_choice(nbhd_x);
fy = f(y);
p = math.exp(-abs(fy-fx)/T);
u = random_binary(p);
# Aktualisieren, wenn sich f-Wert verbessert
# oder mit einer Wahrscheinlichkeit von p:
if fy < fx or u:
# Verbesserung vor der Aktualisierung festhalten (nach `fx = fy` wäre `fy < fx` stets False):
improved = (fy < fx);
# Punkt + Umgebung + f-Wert aktualisieren
x = y;
fx = fy;
nbhd_x = nbhd(x);
n_unchanged = 0;
step = Step(coords=x, label=label(x), improved=improved, chance=u, probability=p, changed=True);
else:
# Nichts (außer logging) machen!
n_unchanged += 1;
step = Step(coords=x, label=label(x));
# »Temperatur« ggf. abkühlen:
if annealing:
T = cool_temperature(T, k);
# Nur dann (erfolgreich) abbrechen, wenn f-Wert lokal Min:
if n_unchanged >= n_stable:
step.stopped = True;
steps.append(step);
break;
steps.append(step);
k += 1;
if verbose:
for step in steps:
print(step);
return x;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# AUXILIARY METHODS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def cool_temperature(T: float, k: int, const: float = 2.) -> float:
harm = const*(k + 1);
return T/(1 + T/harm);
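# NOTE (sketch of the cooling behaviour): each call adds 1/(const·(k+1)) to the
# reciprocal temperature, i.e. 1/T_new = 1/T + 1/(const·(k+1)). Starting e.g. from T = 10:
#     k = 0:  T -> 10/(1 + 10/2) ≈ 1.67
#     k = 1:  T -> 1.67/(1 + 1.67/4) ≈ 1.18
# so worsening moves are accepted with ever smaller probability exp(-|Δf|/T).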

View File

@ -0,0 +1,30 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# IMPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
from src.thirdparty.code import *;
from src.thirdparty.maths import *;
from src.thirdparty.types import *;
from src.core.log import *;
from src.models.random_walk import *;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# EXPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
__all__ = [
'display_table',
];
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# METHOD display table
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def display_table(
) -> str:
log_warn('Noch nicht implementiert!');
return '';

View File

@ -0,0 +1,17 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# IMPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
from src.algorithms.rucksack.algorithms import *;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# EXPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
__all__ = [
'rucksack_greedy_algorithm',
'rucksack_branch_and_bound_algorithm',
];

View File

@ -0,0 +1,262 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# IMPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
from src.thirdparty.types import *;
from src.thirdparty.maths import *;
from models.generated.config import *;
from src.core.utils import *;
from src.models.rucksack import *;
from src.models.stacks import *;
from src.algorithms.rucksack.display import *;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# EXPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
__all__ = [
'rucksack_greedy_algorithm',
'rucksack_branch_and_bound_algorithm',
];
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# METHOD greedy algorithm
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def rucksack_greedy_algorithm(
max_cost: float,
costs: np.ndarray,
values: np.ndarray,
items: np.ndarray,
fractional: bool,
verbose: bool,
) -> Solution:
'''
Durch den Greedy-Algorithmus wird der optimale Wert eines Rucksacks
unter Berücksichtigung der Kapazitätsschranke eingeschätzt.
NOTE: Wenn man `fractional = True` verwendet, liefert der Algorithmus
eine obere Schranke des maximalen Wertes beim Originalproblem.
'''
# sortiere daten:
order = get_sort_order(costs=costs, values=values);
# verbose output hier behandeln (irrelevant für Algorithmus):
if verbose:
repr = display_order(order=order, costs=costs, values=values, items=items, one_based=True);
print('');
print('\x1b[1mRucksack Problem - Greedy\x1b[0m');
print('');
print(repr);
print('');
# führe greedy aus:
n = len(costs);
cost_total = 0;
choice = [ Fraction(0) for _ in range(n) ];
for i in order:
# füge Item i hinzu, solange das Gesamtgewicht noch <= Schranke
if cost_total + costs[i] <= max_cost:
cost_total += costs[i];
choice[i] = Fraction(1);
# falls Bruchteile erlaubt sind, füge einen Bruchteil des i. Items hinzu und abbrechen
elif fractional:
choice[i] = Fraction(Fraction(max_cost - cost_total)/Fraction(costs[i]), _normalize=False);
break;
# ansonsten weiter machen:
else:
continue;
# Aspekte der Lösung speichern:
rucksack = [i for i, v in enumerate(choice) if v > 0]; # Indexes von Items im Rucksack
soln = Solution(
order = order,
choice = choice,
items = items[rucksack].tolist(),
costs = costs[rucksack].tolist(),
values = values[rucksack].tolist(),
);
# verbose output hier behandeln (irrelevant für Algorithmus):
if verbose:
repr_rucksack = display_rucksack(items=items, costs=costs, values=values, choice=choice);
print('\x1b[1mEingeschätzte Lösung\x1b[0m');
print('');
print(f'Mask: [{", ".join(map(str, soln.choice))}]');
print('Rucksack:');
print(repr_rucksack);
print('');
# Lösung ausgeben
return soln;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# METHOD branch and bound algorithm
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def rucksack_branch_and_bound_algorithm(
max_cost: float,
costs: np.ndarray,
values: np.ndarray,
items: np.ndarray,
verbose: bool,
) -> Solution:
'''
Durch Branch & Bound wird der optimale Wert eines Rucksacks
unter Berücksichtigung der Kapazitätsschranke exakt und effizienter bestimmt.
'''
order = get_sort_order(costs=costs, values=values);
# verbose output hier behandeln (irrelevant für Algorithmus):
if verbose:
repr = display_order(order=order, costs=costs, values=values, items=items, one_based=True);
print('');
print('\x1b[1mRucksack Problem - Branch & Bound\x1b[0m');
print('');
print(repr);
print('');
logged_steps = [];
step: Step;
mask = empty_mask(n=len(costs));
bound = np.inf;
S = Stack();
S.push(mask);
while not S.empty():
# top-Element auslesen und Bound berechnen:
A: Mask = S.top();
bound_subtree, choice, order_, state = estimate_lower_bound(mask=A, max_cost=max_cost, costs=costs, values=values, items=items);
# für logging (irrelevant für Algorithmus):
if verbose:
step = Step(bound=bound, bound_subtree=bound_subtree, stack_str=str(S), choice=choice, order=order_, indexes=A.indexes_unset, solution=state);
if bound_subtree < bound:
if state is not None:
step.move = EnumBranchAndBoundMove.BOUND;
step.bound = bound_subtree;
else:
step.move = EnumBranchAndBoundMove.BRANCH;
logged_steps.append(step);
S.pop();
# Update nur nötig, wenn die (eingeschätzte) untere Schranke von A das bisherige Minimum verbessert:
if bound_subtree < bound:
# Bound aktualisieren, wenn sich A nicht weiter aufteilen lässt bzw. wenn sich A wie eine einelementige Option behandeln lässt:
if state is not None:
bound = bound_subtree;
mask = state;
# Branch sonst
else:
B, C = A.split();
S.push(B);
# Nur dann C auf Stack legen, wenn mind. eine Möglichkeit in C die Kapazitätsschranke erfüllt:
if sum(costs[C.indexes_one]) <= max_cost:
S.push(C);
# Aspekte der Lösung speichern
rucksack = mask.indexes_one; # Indexes von Items im Rucksack
soln = Solution(
order = order,
choice = mask.choice,
items = items[rucksack].tolist(),
values = values[rucksack].tolist(),
costs = costs[rucksack].tolist(),
);
# verbose output hier behandeln (irrelevant für Algorithmus):
if verbose:
repr = display_branch_and_bound(values=values, steps=logged_steps);
repr_rucksack = display_rucksack(items=items, costs=costs, values=values, choice=mask.choice);
print(repr);
print('');
print('\x1b[1mLösung\x1b[0m');
print('');
print(f'Mask: [{", ".join(map(str, soln.choice))}]');
print('Rucksack:');
print(repr_rucksack);
print('');
# Lösung ausgeben
return soln;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# AUXILIARY METHOD resort
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def get_sort_order(costs: np.ndarray, values: np.ndarray) -> List[int]:
'''
Sortiert Daten absteigend nach values/costs.
'''
n = len(costs);
indexes = list(range(n));
margin = [ value/cost for cost, value in zip(costs, values) ];
order = sorted(indexes, key=lambda i: -margin[i]);
return order;
def estimate_lower_bound(
mask: Mask,
max_cost: float,
costs: np.ndarray,
values: np.ndarray,
items: np.ndarray,
) -> Tuple[float, List[Fraction], List[int], Optional[Mask]]:
'''
Wenn partielle Information über den Rucksack festgelegt ist,
kann man bei dem unbekannten Teil das Rucksack-Problem
mit Greedy-Algorithmus »lösen«,
um schnell eine gute Einschätzung zu bestimmen.
NOTE: Diese Funktion wird im Skript mit `g(mask)` bezeichnet.
'''
indexes_one = mask.indexes_one;
indexes_unset = mask.indexes_unset;
n = len(mask);
choice = np.zeros(shape=(n,), dtype=Fraction);
order = np.asarray(range(n));
# Berechnungen bei Items mit bekanntem Status in Rucksack:
value_rucksack = sum(values[indexes_one]);
cost_rucksack = sum(costs[indexes_one]);
choice[indexes_one] = Fraction(1);
# Für Rest des Rucksacks (Items mit unbekanntem Status):
cost_rest = max_cost - cost_rucksack;
state = None;
# Prüfe, ob man als Lösung alles/nichts hinzufügen kann:
if len(indexes_unset) == 0:
state = mask;
value_rest = 0;
elif sum(costs[indexes_unset]) <= cost_rest:
state = mask.pad(MaskValue.ONE);
choice[indexes_unset] = Fraction(1);
value_rest = sum(values[indexes_unset]);
elif min(costs[indexes_unset]) > cost_rest:
state = mask.pad(MaskValue.ZERO);
choice[indexes_unset] = Fraction(0);
value_rest = 0;
# Sonst mit Greedy-Algorithmus lösen:
# NOTE: Lösung ist eine Überschätzung des max-Wertes.
else:
soln_rest = rucksack_greedy_algorithm(
max_cost = cost_rest, # <- Kapazität = Restgewicht
costs = costs[indexes_unset],
values = values[indexes_unset],
items = items[indexes_unset],
fractional = True,
verbose = False,
);
choice[indexes_unset] = soln_rest.choice;
value_rest = soln_rest.total_value;
# Berechne Permutation für Teilrucksack
permute_part(order, indexes=indexes_unset, order=soln_rest.order, in_place=True);
# Einschätzung des max-Wertes:
value_max_est = value_rucksack + value_rest;
# Ausgabe mit -1 multiplizieren (weil maximiert wird):
return -value_max_est, choice.tolist(), order.tolist(), state;
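# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# USAGE SKETCH
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Minimal sketch with made-up data: capacity 10 and four items. The exact branch-and-bound
# search should pack items 'A' and 'C' (cost 6 + 4 = 10, value 9 + 5 = 14).
# Using `verbose=True` assumes the application config (config.OPTIONS) has been initialised.
if __name__ == '__main__':
    soln = rucksack_branch_and_bound_algorithm(
        max_cost = 10.,
        costs    = np.asarray([6., 5., 4., 3.]),
        values   = np.asarray([9., 7., 5., 3.]),
        items    = np.asarray(['A', 'B', 'C', 'D']),
        verbose  = False,
    );
    print(soln);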

View File

@ -0,0 +1,174 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# IMPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
from src.thirdparty.code import *;
from src.thirdparty.maths import *;
from src.thirdparty.types import *;
from src.core.utils import *;
from src.setup import config;
from models.generated.config import *;
from src.models.stacks import *;
from src.models.rucksack import *;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# EXPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
__all__ = [
'display_order',
'display_rucksack',
'display_branch_and_bound',
'display_sum',
];
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# METHOD display order
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def display_order(
order: List[int],
costs: np.ndarray,
values: np.ndarray,
items: np.ndarray,
one_based: bool = False,
) -> str:
table = pd.DataFrame({
'items': items,
'order': iperm(order),
'values': values,
'costs': costs,
'margin': [f'{value/cost:.6f}' for cost, value in zip(costs, values)],
}) \
.reset_index(drop=True);
if one_based:
table['order'] += 1;
# benutze pandas-Dataframe + tabulate, um schöner darzustellen:
repr = tabulate(
table,
headers=['item', 'greedy order', 'value', 'cost', 'value/cost'],
showindex=False,
colalign=('left', 'center', 'center', 'center', 'right'),
tablefmt='rst'
);
return repr;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# METHOD display rucksack
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def display_rucksack(
items: np.ndarray,
costs: np.ndarray,
values: np.ndarray,
choice: List[Fraction],
) -> str:
show_options = config.OPTIONS.rucksack.show;
render = lambda r: f'{r:g}';
choice = np.asarray(choice);
rucksack = np.where(choice > 0);
if not(EnumRucksackShow.all_weights in show_options):
items = items[rucksack];
costs = costs[rucksack];
values = values[rucksack];
choice = choice[rucksack];
table = pd.DataFrame({
'items': items.tolist() + ['----', ''],
'nr': list(map(str, choice))
+ ['----', f'\x1b[92;1m{float(sum(choice)):g}\x1b[0m'],
'costs': list(map(render, costs))
+ ['----', f'\x1b[92;1m{sum(choice*costs):g}\x1b[0m'],
'values': list(map(render, values))
+ ['----', f'\x1b[92;1m{sum(choice*values):g}\x1b[0m'],
});
repr = tabulate(
table,
headers=['item', 'nr', 'cost', 'value'],
showindex=False,
colalign=('left', 'center', 'center', 'center'),
tablefmt='rst'
);
return repr;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# METHOD display result of branch and bound
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def display_branch_and_bound(values: np.ndarray, steps: List[Step]) -> str:
show_options = config.OPTIONS.rucksack.show;
show_all_sums = (EnumRucksackShow.all_sums in show_options);
rows = [];
used_choices = [];
index_soln = max([-1] + [ i for i, step in enumerate(steps) if step.move == EnumBranchAndBoundMove.BOUND ]);
for i, step in enumerate(steps):
if show_all_sums or step.choice not in used_choices:
# add the sum expressions for the greedy algorithm:
used_choices.append(step.choice);
expr = display_sum(choice=step.choice, values=values, as_maximum=False, order=step.order, indexes=step.indexes);
else:
expr = '';
bound_str = f'{step.bound:+g}';
solution_str = f'{step.solution or ""}';
move_str = ('' if step.move == EnumBranchAndBoundMove.NONE else step.move.value);
if i == index_soln:
bound_str = f'* \x1b[92;1m{bound_str}\x1b[0m';
rows.append({
'bound': f'{bound_str}',
'bound_subtree': f'{step.bound_subtree:g}',
'bound_subtree_sum': expr,
'stack': step.stack_str,
'solution': f'\x1b[2m{solution_str}\x1b[0m',
'move': f'\x1b[2m{move_str}\x1b[0m',
});
table = pd.DataFrame(rows).reset_index(drop=True);
# use a pandas DataFrame + tabulate for a nicer display:
repr = tabulate(
table,
headers=['bound', 'g(TOP(S))', '', 'S — stack', '\x1b[2msoln\x1b[0m', '\x1b[2mmove\x1b[0m'],
showindex=False,
colalign=('right', 'right', 'left', 'right', 'center', 'left'),
tablefmt='simple'
);
return repr;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# METHOD display sum
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def display_sum(
choice: List[Fraction],
values: np.ndarray,
order: Optional[List[int]] = None,
indexes: List[int] = [],
as_maximum: bool = True,
) -> str:
show_options = config.OPTIONS.rucksack.show;
show_all_weights = (EnumRucksackShow.all_weights in show_options);
def render(x: Tuple[bool, Fraction, float]):
b, u, value = x;
if u == 0:
expr = f'\x1b[94;2m{value:g}\x1b[0m' if b else f'\x1b[2m{value:g}\x1b[0m';
else:
expr = f'\x1b[94m{value:g}\x1b[0m' if b else f'\x1b[0m{value:g}\x1b[0m';
if not show_all_weights and u == 1:
return expr;
return f'\x1b[2;4m{u}\x1b[0m\x1b[2m·\x1b[0m{expr}';
parts = [ (i in indexes, u, x) for i, (u, x) in enumerate(zip(choice, values)) ];
if not (order is None):
parts = [ parts[j] for j in order ];
if not show_all_weights:
parts = list(filter(lambda x: x[1] > 0, parts));
expr = '\x1b[2m + \x1b[0m'.join(map(render, parts));
if as_maximum:
return f'\x1b[2m=\x1b[0m {expr}';
return f'\x1b[2m= -(\x1b[0m{expr}\x1b[2m)\x1b[0m';
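All of the display helpers in this file follow the same pattern: build a small pandas DataFrame and render it with tabulate. A minimal, self-contained sketch of that pattern (the item data is made up for illustration and not taken from the repository):

```py
import numpy as np;
import pandas as pd;
from tabulate import tabulate;

# toy data, purely illustrative:
costs  = np.asarray([4.0, 3.0, 2.0]);
values = np.asarray([8.0, 9.0, 6.0]);
table = pd.DataFrame({
    'items':  ['A', 'B', 'C'],
    'values': values,
    'costs':  costs,
    'margin': [f'{v/c:.6f}' for c, v in zip(costs, values)],
});
# render the table in reStructuredText style, as the helpers above do:
print(tabulate(table, headers=['item', 'value', 'cost', 'value/cost'], showindex=False, tablefmt='rst'));
```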

View File

@ -0,0 +1,16 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# IMPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
from src.algorithms.tarjan.algorithms import *;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# EXPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
__all__ = [
'tarjan_algorithm',
];

View File

@ -0,0 +1,196 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# IMPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
from __future__ import annotations;
from src.thirdparty.types import *;
from src.thirdparty.maths import *;
from src.core.log import *;
from src.models.stacks import *;
from src.models.graphs import *;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# EXPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
__all__ = [
'tarjan_algorithm',
];
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# CONSTANTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
class State(Enum):
UNTOUCHED = 0;
PENDING = 1;
FINISHED = 2;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Tarjan Algorithm
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def tarjan_algorithm(G: Graph, verbose: bool = False) -> List[Any]:
'''
# Tarjan Algorithm #
Runs the Tarjan algorithm to compute the strongly connected components.
'''
# initialise state - mark all nodes as UNTOUCHED:
ctx = Context(G);
# loop through all nodes and carry out the Tarjan algorithm, provided the node has not already been visited.
for u in G.nodes:
if ctx.get_state(u) == State.UNTOUCHED:
tarjan_visit(G, u, ctx);
if verbose:
repr = ctx.repr();
print('');
print(f'\x1b[1mSummary of the execution of the Tarjan algorithm\x1b[0m');
print('');
print(repr);
print('');
print('\x1b[1mStrongly connected components:\x1b[0m');
print('');
for component in ctx.components:
print(component);
print('');
return ctx.components;
def tarjan_visit(G: Graph, u: Any, ctx: Context):
'''
Recursive depth-first search algorithm to compute the strongly connected components of a graph.
'''
# Place node on stack + initialise visit-index + component-index.
ctx.max_index += 1;
ctx.push(u);
ctx.set_least_index(u, ctx.max_index);
ctx.set_index(u, ctx.max_index);
ctx.set_state(u, State.PENDING);
'''
Compute strongly connected components of each child node.
NOTE: Child nodes remain on stack, if and only if parent is in same component.
'''
for v in G.successors(u):
# Visit child node only if untouched:
if ctx.get_state(v) == State.UNTOUCHED:
tarjan_visit(G, v, ctx);
ctx.set_least_index(u, min(ctx.get_least_index(u), ctx.get_least_index(v)));
# Otherwise update associated component-index of parent node, if in same component as child:
elif ctx.stack_contains(v):
ctx.set_least_index(u, min(ctx.get_least_index(u), ctx.get_index(v)));
ctx.set_state(u, State.FINISHED);
ctx.log_info(u);
# If u is the root of its component (index == least-index), pop everything from the stack up to u and record the component:
if ctx.get_index(u) == ctx.get_least_index(u):
component = [];
while True:
v = ctx.top();
ctx.pop();
component.append(v);
if u == v:
break;
ctx.components.append(component);
return;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# AUXILIARY context variables for algorithm
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@dataclass
class NodeInformationDefault:
node: Any = field(default=None);
least_index: int = field(default=0);
index: int = field(default=0);
state: State = field(default=State.UNTOUCHED, repr=False);
class NodeInformation(NodeInformationDefault):
def __init__(self, u: Any):
super().__init__();
self.node = u;
@dataclass
class ContextDefault:
max_index: int = field(default=0);
verbose: bool = field(default=False);
stack: Stack = field(default_factory=lambda: Stack());
components: list[list[Any]] = field(default_factory=list);
infos: dict[Any, NodeInformation] = field(default_factory=dict);
finished: List[Any] = field(default_factory=list);
class Context(ContextDefault):
def __init__(self, G: Graph):
super().__init__();
self.infos = { u: NodeInformation(u) for u in G.nodes };
def push(self, u: Any):
self.stack.push(u);
def top(self) -> Any:
return self.stack.top();
def pop(self) -> Any:
return self.stack.pop();
def update_infos(self, u: Any, info: NodeInformation):
self.infos[u] = info;
def set_state(self, u: Any, state: State):
info = self.infos[u];
info.state = state;
self.update_infos(u, info);
def set_least_index(self, u: Any, least_index: int):
info = self.infos[u];
info.least_index = least_index;
self.update_infos(u, info);
def set_index(self, u: Any, index: int):
info = self.infos[u];
info.index = index;
self.update_infos(u, info);
def stack_contains(self, u: Any) -> bool:
return self.stack.contains(u);
def get_info(self, u: Any) -> NodeInformation:
return self.infos[u];
def get_state(self, u: Any) -> State:
return self.get_info(u).state;
def get_least_index(self, u: Any) -> int:
return self.get_info(u).least_index;
def get_index(self, u: Any) -> int:
return self.get_info(u).index;
def log_info(self, u: Any):
self.finished.append(u);
def repr(self) -> str:
table = pd.DataFrame([ self.infos[u] for u in self.finished ]) \
.drop(columns='state');
table = table[['node', 'index', 'least_index']];
# use a pandas DataFrame + tabulate for a nicer display:
repr = tabulate(
table,
headers = {
'Knoten': 'node',
'Idx': 'index',
'min. Idx': 'least_index',
},
showindex = False,
stralign = 'center',
tablefmt = 'grid',
);
return repr;
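A short usage sketch of the algorithm above, assuming the package layout that appears elsewhere in this diff (src.models.graphs, src.algorithms.tarjan); the example graph is made up:

```py
from src.models.graphs import Graph;
from src.algorithms.tarjan import tarjan_algorithm;

# a 3-cycle and a 2-cycle joined by a single edge -> two strongly connected components:
G = Graph(
    nodes = [1, 2, 3, 4, 5],
    edges = [(1, 2), (2, 3), (3, 1), (3, 4), (4, 5), (5, 4)],
);
components = tarjan_algorithm(G, verbose=True);
print(components);  # expected: [[5, 4], [3, 2, 1]] (ordering depends on the traversal)
```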

View File

@ -0,0 +1,16 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# IMPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
from src.algorithms.tsp.algorithms import *;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# EXPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
__all__ = [
'tsp_algorithm',
];

View File

@ -0,0 +1,75 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# IMPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
from src.thirdparty.types import *;
from src.thirdparty.maths import *;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# EXPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
__all__ = [
'tsp_algorithm',
];
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# METHOD tsp_algorithm
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def tsp_algorithm(
dist: np.ndarray, # NDArray[(Any, Any), float],
optimise = min,
verbose: bool = False,
) -> tuple[float, list[list[int]]]:
m, n = dist.shape[:2];
assert m == n;
memory: Dict[tuple[int, tuple], tuple[float, list[list[int]]]] = dict();
def g(i: int, S: list[int]) -> tuple[float, list[list[int]]]:
# if g has already been computed for this input, return the memoised value:
if (i, tuple(S)) not in memory.keys():
if len(S) == 0:
paths = [[i]] if i == 0 else [[i, 0]];
memory[(i, tuple(S))] = (dist[i,0], paths);
else:
values_and_paths = [ (j, *g(j, (*S[:index], *S[(index+1):]))) for index, j in enumerate(S) ];
# compute d(i,j) + g(j, S \ {j}) for each j in S:
values_and_paths = [ (j, dist[i,j] + value, paths) for j, value, paths in values_and_paths];
value = optimise([value for _, value, _ in values_and_paths]);
paths = [];
for j, value_, paths_ in values_and_paths:
if value_ == value:
paths += [ [i, *path] for path in paths_ ];
memory[(i, tuple(S))] = (value, paths);
return memory[(i, tuple(S))];
# compute g(0, {1,2,...,n-1}):
optimal_wert, optimal_paths = g(0, [i for i in range(1,n)]);
if verbose:
display_computation(n, memory);
return optimal_wert, optimal_paths;
def display_computation(n: int, memory: Dict[tuple[int, tuple], tuple[float, list[list[int]]]]):
keys = sorted(memory.keys(), key=lambda key: (len(key[1]), key[0], key[1]));
addone = lambda x: x + 1;
for k in range(0,n):
print(f'\x1b[4;1m|S| = {k}:\x1b[0m');
for (i, S) in keys:
if len(S) != k:
continue;
value, paths = memory[(i, S)];
print(f'g({addone(i)}, {list(map(addone, S))}) = {value}');
if len(paths) == 1:
print(f'optimal way: {" -> ".join(map(str, map(addone, paths[0])))}');
else:
print('optimal ways:');
for path in paths:
print(f'* {" -> ".join(map(str, map(addone, path)))}');
print('');
return;
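The recursion above is a memoised Held-Karp-style dynamic program: with optimise=min, g(i, S) is the cheapest way from city i through all cities in S and back to city 0. A usage sketch with a made-up 4-city instance (import path as elsewhere in this diff):

```py
import numpy as np;
from src.algorithms.tsp import tsp_algorithm;

# symmetric toy instance, values invented for illustration:
dist = np.asarray([
    [ 0.,  2.,  9., 10.],
    [ 2.,  0.,  6.,  4.],
    [ 9.,  6.,  0.,  3.],
    [10.,  4.,  3.,  0.],
]);
value, paths = tsp_algorithm(dist, optimise=min, verbose=True);
print(value);  # should report 18.0 for this instance
```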

code/python/src/api.py (new file, 51 lines)
View File

@ -0,0 +1,51 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# IMPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
from src.thirdparty.code import *;
from models.generated.commands import *;
from src.models.config import *;
from src.endpoints import *;
from src.core.calls import *;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# EXPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
__all__ = [
'run_command',
'run_command_from_json',
];
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# API METHODS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@run_safely(tag='api-from-json')
def run_command_from_json(command_json: str) -> Result[CallResult, CallError]:
command = command_from_json(command_json);
return run_command(command);
@run_safely(tag='api-from-command')
def run_command(command: Command) -> Result[CallResult, CallError]:
if isinstance(command, CommandTarjan):
return endpoint_tarjan(command);
elif isinstance(command, CommandTsp):
return endpoint_tsp(command);
elif isinstance(command, CommandHirschberg):
return endpoint_hirschberg(command);
elif isinstance(command, CommandRucksack):
return endpoint_rucksack(command);
elif isinstance(command, CommandRandomWalk):
return endpoint_random_walk(command);
elif isinstance(command, CommandGenetic):
return endpoint_genetic(command);
elif isinstance(command, CommandEuklid):
return endpoint_euklid(command);
elif isinstance(command, CommandPollard):
return endpoint_pollard_rho(command);
raise Exception(f'No endpoint set for `{command.name.value}`-command type.');
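A hypothetical call through the JSON entry point. The exact field names are defined by the generated command models (models/generated/commands), which are not part of this diff, so the JSON shape below is only an assumption:

```py
from src.api import run_command_from_json;

# NOTE: the keys 'name' and 'numbers' are assumptions about the generated schema.
result = run_command_from_json('{"name": "euklid", "numbers": [2022, 1998]}');
print(result);  # Ok(CallResult(...)) on success, Err(CallError(...)) on failure
```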

View File

@ -0,0 +1,149 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# IMPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
from __future__ import annotations;
from src.thirdparty.code import *;
from src.thirdparty.misc import *;
from src.thirdparty.run import *;
from src.thirdparty.types import *;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# EXPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
__all__ = [
'CallResult',
'CallError',
'run_safely',
];
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# CONSTANTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# local usage only
T = TypeVar('T');
V = TypeVar('V');
E = TypeVar('E', bound=list);
ARGS = ParamSpec('ARGS');
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# CLASSES call result + call error
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@dataclass
class CallResult(): # pragma: no cover
'''
An auxiliary class which keeps track of the latest return value during calls.
'''
action_taken: bool = field(default=False);
message: Optional[Any] = field(default=None);
@dataclass
class CallErrorRaw(): # pragma: no cover
timestamp: str = field();
tag: str = field();
errors: List[str] = field(default_factory=list);
class CallError(CallErrorRaw):
'''
An auxiliary class which keeps track of potentially multiple errors during calls.
'''
timestamp: str;
tag: str;
errors: List[str];
def __init__(self, tag: str, err: Any = Nothing()):
self.timestamp = str(datetime.now());
self.tag = tag;
self.errors = [];
if isinstance(err, list):
for e in err:
self.append(e);
else:
self.append(err);
def __len__(self) -> int:
return len(self.errors);
def append(self, e: Any):
if isinstance(e, Nothing):
return;
if isinstance(e, Some):
e = e.unwrap();
self.errors.append(str(e));
def extend(self, E: CallError):
self.errors.extend(E.errors);
def __repr__(self) -> str:
return f'CallError(tag=\'{self.tag}\', errors={self.errors})';
def __str__(self) -> str:
return self.__repr__();
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# DECORATOR - forces methods to run safely
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def run_safely(tag: Union[str, None] = None, error_message: Union[str, None] = None):
'''
Creates a decorator for an action to perform it safely.
@inputs (parameters)
- `tag` - optional string to aid error tracking.
- `error_message` - optional string for an error message.
### Example usage ###
```py
@run_safely(tag='recognise int', error_message='unrecognised string')
def action1(x: str) -> Result[int, CallError]:
return Ok(int(x));
assert action1('5') == Ok(5);
result = action1('not a number');
assert isinstance(result, Err);
err = result.unwrap_err();
assert isinstance(err, CallError);
assert err.tag == 'recognise int';
assert err.errors == ['unrecognised string'];
@run_safely('recognise int')
def action2(x: str) -> Result[int, CallError]:
return Ok(int(x));
assert action2('5') == Ok(5);
result = action2('not a number');
assert isinstance(result, Err);
err = result.unwrap_err();
assert isinstance(err, CallError);
assert err.tag == 'recognise int';
assert len(err.errors) == 1;
```
NOTE: in the second example, err.errors is a list containing
the stringified Exception generated when calling `int('not a number')`.
'''
def dec(action: Callable[ARGS, Result[V, CallError]]) -> Callable[ARGS, Result[V, CallError]]:
'''
Wraps action with return type Result[..., CallError],
so that it is performed safely,
catching any internal exceptions as an Err(...)-component of the Result.
'''
@wraps(action)
def wrapped_action(*_, **__) -> Result[V, CallError]:
# NOTE: intercept Exceptions first, then flatten:
return Result.of(lambda: action(*_, **__)) \
.or_else(
lambda err: Err(CallError(
tag = tag or action.__name__,
err = error_message or err
))
) \
.and_then(lambda V: V);
return wrapped_action;
return dec;

View File

@ -0,0 +1,97 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# IMPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
from src.thirdparty.code import *;
from src.thirdparty.log import *;
from src.thirdparty.types import *;
from src.core.calls import *;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# EXPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
__all__ = [
'LOG_LEVELS',
'configure_logging',
'log_info',
'log_debug',
'log_warn',
'log_error',
'log_fatal',
'log_result',
'log_dev',
];
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# CONSTANTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
_LOGGING_DEBUG_FILE: str = 'logs/debug.log';
class LOG_LEVELS(Enum): # pragma: no cover
INFO = logging.INFO;
DEBUG = logging.DEBUG;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# METHODS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def configure_logging(level: LOG_LEVELS): # pragma: no cover
logging.basicConfig(
format = '[\x1b[1m%(levelname)s\x1b[0m] %(message)s',
level = level.value,
);
return;
def log_debug(*messages: Any):
logging.debug(*messages);
def log_info(*messages: Any):
logging.info(*messages);
def log_warn(*messages: Any):
logging.warning(*messages);
def log_error(*messages: Any):
logging.error(*messages);
def log_fatal(*messages: Any):
logging.fatal(*messages);
exit(1);
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Special Methods
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def log_result(result: Result[CallResult, CallError], debug: bool = False):
'''
Logs the safely encapsulated result of a call as either debug/info or an error.
@inputs
- `result` - the result of the call.
- `debug = False` (default) - if the result is okay, will be logged as an INFO message.
- `debug = True` - if the result is okay, will be logged as a DEBUG message.
'''
if isinstance(result, Ok):
value = result.unwrap();
if debug:
log_debug(asdict(value));
else:
log_info(asdict(value));
else:
err = result.unwrap_err();
log_error(asdict(err));
return;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# DEBUG LOGGING FOR DEVELOPMENT
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def log_dev(*messages: Any): # pragma: no cover
with open(_LOGGING_DEBUG_FILE, 'a') as fp:
print(*messages, file=fp);
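A minimal sketch of how this module is used (the message text is illustrative):

```py
from src.core.log import *;

configure_logging(LOG_LEVELS.INFO);
log_info('starting computation ...');
log_warn('skipped %d items', 3);
```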

View File

@ -0,0 +1,47 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# IMPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
from src.thirdparty.code import *;
from src.thirdparty.maths import *;
from src.thirdparty.types import *;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# EXPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
__all__ = [
'iperm',
'permute_part',
];
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# METHODS permutations
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def iperm(order: List[int]) -> List[int]:
'''
Computes the inverse of a permutation.
'''
perm = list(enumerate(order));
uorder = list(map(lambda x: x[0], sorted(perm, key=lambda x: x[1])));
return uorder;
def permute_part(
x: np.ndarray,
indexes: List[int],
order: List[int],
in_place: bool = True,
) -> np.ndarray:
'''
Permutes a part of a list by a relative permutation for that part of the list.
'''
if not in_place:
x = x.copy(); # NOTE: use .copy(), since x[:] only creates a view of a numpy array
part = x[indexes];
part[:] = part[order];
x[indexes] = part;
return x;
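A quick illustration of the two helpers (the import is omitted because the module path of this file is not shown in the diff; the values are made up):

```py
import numpy as np;

# inverse of the permutation 0 -> 2, 1 -> 0, 2 -> 1:
print(iperm([2, 0, 1]));  # [1, 2, 0]

# permute only the entries at positions 1 and 3 (swap them) in place:
x = np.asarray([10, 20, 30, 40]);
permute_part(x, indexes=[1, 3], order=[1, 0]);
print(x);  # [10 40 30 20]
```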

View File

@ -0,0 +1,30 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# IMPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
from src.endpoints.ep_algorithm_hirschberg import *;
from src.endpoints.ep_algorithm_tarjan import *;
from src.endpoints.ep_algorithm_tsp import *;
from src.endpoints.ep_algorithm_rucksack import *;
from src.endpoints.ep_algorithm_genetic import *;
from src.endpoints.ep_algorithm_random_walk import *;
from src.endpoints.ep_algorithm_euklid import *;
from src.endpoints.ep_algorithm_pollard_rho import *;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# EXPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
__all__ = [
'endpoint_hirschberg',
'endpoint_tarjan',
'endpoint_tsp',
'endpoint_rucksack',
'endpoint_random_walk',
'endpoint_genetic',
'endpoint_euklid',
'endpoint_pollard_rho',
];

View File

@ -0,0 +1,34 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# IMPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
from src.thirdparty.code import *;
from models.generated.commands import *;
from src.core.calls import *;
from src.setup import config;
from src.algorithms.euklid import *;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# EXPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
__all__ = [
'endpoint_euklid',
];
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# ENDPOINT
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@run_safely()
def endpoint_euklid(command: CommandEuklid) -> Result[CallResult, CallError]:
result = euklidean_algorithm(
a = command.numbers[0].__root__,
b = command.numbers[1].__root__,
verbose = config.OPTIONS.euklid.verbose,
);
return Ok(CallResult(action_taken=True, message=result));

View File

@ -0,0 +1,34 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# IMPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
from src.thirdparty.code import *;
from models.generated.commands import *;
from src.core.calls import *;
from src.setup import config;
from src.algorithms.genetic import *;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# EXPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
__all__ = [
'endpoint_genetic',
];
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# ENDPOINT
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@run_safely()
def endpoint_genetic(command: CommandGenetic) -> Result[CallResult, CallError]:
result = genetic_algorithm(
individual1 = command.population[0],
individual2 = command.population[1],
verbose = config.OPTIONS.genetic.verbose,
);
return Ok(CallResult(action_taken=True, message=result));

View File

@ -0,0 +1,42 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# IMPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
from src.thirdparty.code import *;
from models.generated.commands import *;
from src.core.calls import *;
from src.setup import config;
from src.algorithms.hirschberg import *;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# EXPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
__all__ = [
'endpoint_hirschberg',
];
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# ENDPOINT
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@run_safely()
def endpoint_hirschberg(command: CommandHirschberg) -> Result[CallResult, CallError]:
if command.once:
result = simple_algorithm(
X = command.word1,
Y = command.word2,
verbose = config.OPTIONS.hirschberg.verbose,
);
else:
result = hirschberg_algorithm(
X = command.word1,
Y = command.word2,
verbose = config.OPTIONS.hirschberg.verbose,
show = config.OPTIONS.hirschberg.show,
);
return Ok(CallResult(action_taken=True, message=result));

View File

@ -0,0 +1,46 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# IMPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
from src.thirdparty.code import *;
from models.generated.commands import *;
from src.core.calls import *;
from src.setup import config;
from src.algorithms.pollard_rho import *;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# EXPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
__all__ = [
'endpoint_pollard_rho',
];
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# ENDPOINT
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@run_safely()
def endpoint_pollard_rho(command: CommandPollard) -> Result[CallResult, CallError]:
match command.growth:
case EnumPollardGrowthRate.linear:
result = pollard_rho_algorithm_linear(
n = command.number,
x_init = command.x_init,
verbose = config.OPTIONS.pollard_rho.verbose,
);
case EnumPollardGrowthRate.exponential:
result = pollard_rho_algorithm_exponential(
n = command.number,
x_init = command.x_init,
verbose = config.OPTIONS.pollard_rho.verbose,
);
case _ as growth:
raise Exception(f'No algorithm implemented for \x1b[1m{growth.value}\x1b[0m as growth rate.');
return Ok(CallResult(action_taken=True, message=result));

View File

@ -0,0 +1,75 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# IMPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
from src.thirdparty.code import *;
from models.generated.commands import *;
from src.core.calls import *;
from src.setup import config;
from src.models.random_walk import *;
from src.algorithms.random_walk import *;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# EXPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
__all__ = [
'endpoint_random_walk',
];
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# ENDPOINT
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@run_safely()
def endpoint_random_walk(command: CommandRandomWalk) -> Result[CallResult, CallError]:
# Compute landscape (fitness fct + topology) + initial co-ordinates:
one_based = command.one_based;
landscape = Landscape(
values = command.landscape.values,
labels = command.landscape.labels,
metric = command.landscape.neighbourhoods.metric,
one_based = one_based,
);
if isinstance(command.coords_init, list):
coords_init = tuple(command.coords_init);
if one_based:
coords_init = tuple(xx - 1 for xx in coords_init);
assert len(coords_init) == landscape.dim, 'Dimension of initial co-ordinates inconsistent with landscape!';
else:
coords_init = landscape.coords_middle;
match command.algorithm:
case EnumWalkMode.adaptive:
result = adaptive_walk_algorithm(
landscape = landscape,
r = command.landscape.neighbourhoods.radius,
coords_init = coords_init,
optimise = command.optimise,
verbose = config.OPTIONS.random_walk.verbose
);
case EnumWalkMode.gradient:
result = gradient_walk_algorithm(
landscape = landscape,
r = command.landscape.neighbourhoods.radius,
coords_init = coords_init,
optimise = command.optimise,
verbose = config.OPTIONS.random_walk.verbose
);
case EnumWalkMode.metropolis:
result = metropolis_walk_algorithm(
landscape = landscape,
r = command.landscape.neighbourhoods.radius,
coords_init = coords_init,
T = command.temperature_init,
annealing = command.annealing,
optimise = command.optimise,
verbose = config.OPTIONS.random_walk.verbose
);
case _ as alg:
raise Exception(f'No algorithm implemented for {alg.value}.');
return Ok(CallResult(action_taken=True, message=result));

View File

@ -0,0 +1,54 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# IMPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
from src.thirdparty.code import *;
from src.thirdparty.maths import *;
from models.generated.commands import *;
from src.core.calls import *;
from src.setup import config;
from src.algorithms.rucksack import *;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# EXPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
__all__ = [
'endpoint_rucksack',
];
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# ENDPOINT
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@run_safely()
def endpoint_rucksack(command: CommandRucksack) -> Result[CallResult, CallError]:
n = len(command.costs);
assert len(command.values) == n, 'Number of values and costs must coincide!';
assert len(command.items) in [0, n], f'Number of items must be 0 or {n}!';
command.items = command.items or [ str(index + 1) for index in range(n) ];
match command.algorithm:
case EnumRucksackAlgorithm.greedy:
result = rucksack_greedy_algorithm(
max_cost = command.max_cost,
costs = np.asarray(command.costs[:]),
values = np.asarray(command.values[:]),
items = np.asarray(command.items[:]),
fractional = command.allow_fractional,
verbose = config.OPTIONS.rucksack.verbose,
);
case EnumRucksackAlgorithm.branch_and_bound:
result = rucksack_branch_and_bound_algorithm(
max_cost = command.max_cost,
costs = np.asarray(command.costs[:]),
values = np.asarray(command.values[:]),
items = np.asarray(command.items[:]),
verbose = config.OPTIONS.rucksack.verbose,
);
case _ as alg:
raise Exception(f'No algorithm implemented for {alg.value}.');
return Ok(CallResult(action_taken=True, message=result));

View File

@ -0,0 +1,37 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# IMPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
from src.thirdparty.code import *;
from models.generated.commands import *;
from src.core.calls import *;
from src.setup import config;
from src.models.graphs import *;
from src.algorithms.tarjan import *;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# EXPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
__all__ = [
'endpoint_tarjan',
];
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# ENDPOINT
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@run_safely()
def endpoint_tarjan(command: CommandTarjan) -> Result[CallResult, CallError]:
result = tarjan_algorithm(
G = Graph(
nodes=command.nodes,
edges=list(map(tuple, command.edges)),
),
verbose = config.OPTIONS.tarjan.verbose
);
return Ok(CallResult(action_taken=True, message=result));

View File

@ -0,0 +1,35 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# IMPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
from src.thirdparty.code import *;
from src.thirdparty.maths import *;
from models.generated.commands import *;
from src.core.calls import *;
from src.setup import config;
from src.algorithms.tsp import *;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# EXPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
__all__ = [
'endpoint_tsp',
];
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# ENDPOINT
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@run_safely()
def endpoint_tsp(command: CommandTsp) -> Result[CallResult, CallError]:
result = tsp_algorithm(
dist = np.asarray(command.dist, dtype=float),
optimise = min if command.optimise == EnumOptimiseMode.min else max,
verbose = config.OPTIONS.tsp.verbose,
);
return Ok(CallResult(action_taken=True, message=result));

View File

@ -0,0 +1,19 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# IMPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
from src.models.config.app import *;
from src.models.config.commands import *;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# EXPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
__all__ = [
'log_level',
'command_from_json',
'interpret_command',
];

View File

@ -0,0 +1,31 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# IMPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
from models.generated.config import AppOptions;
from models.generated.config import EnumLogLevel;
from src.core.log import *;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# EXPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
__all__ = [
'log_level',
];
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# METHODS log level
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def log_level(options: AppOptions) -> LOG_LEVELS:
match options.log_level:
case EnumLogLevel.debug:
return LOG_LEVELS.DEBUG;
case EnumLogLevel.info:
return LOG_LEVELS.INFO;
case _:
return LOG_LEVELS.INFO;

View File

@ -0,0 +1,55 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# IMPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
from src.thirdparty.config import *;
from models.generated.commands import *;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# EXPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
__all__ = [
'command_from_json',
'interpret_command',
];
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# METHODS Convert to appropriate command type
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def command_from_json(command_json: str) -> Command:
try:
instructions = json.loads(command_json);
except:
raise Exception('Invalid json!');
try:
command = Command(**instructions);
except:
raise Exception('Invalid instruction format - consult schema!');
command = interpret_command(command);
return command;
def interpret_command(command: Command) -> Command:
match command.name:
case EnumAlgorithmNames.tarjan:
return CommandTarjan(**command.dict());
case EnumAlgorithmNames.tsp:
return CommandTsp(**command.dict());
case EnumAlgorithmNames.hirschberg:
return CommandHirschberg(**command.dict());
case EnumAlgorithmNames.rucksack:
return CommandRucksack(**command.dict());
case EnumAlgorithmNames.random_walk:
return CommandRandomWalk(**command.dict());
case EnumAlgorithmNames.genetic:
return CommandGenetic(**command.dict());
case EnumAlgorithmNames.euklid:
return CommandEuklid(**command.dict());
case EnumAlgorithmNames.pollard_rho:
return CommandPollard(**command.dict());
raise Exception(f'Command type `{command.name.value}` not recognised!');

View File

@ -0,0 +1,16 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# IMPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
from src.models.euklid.logging import *;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# EXPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
__all__ = [
'Step',
];

View File

@ -0,0 +1,30 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# IMPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
from src.thirdparty.types import *;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# EXPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
__all__ = [
'Step',
];
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# CLASS Step
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@dataclass
class Step():
a: int = field();
b: int = field();
gcd: int = field();
div: int = field();
rem: int = field();
coeff_a: int = field();
coeff_b: int = field();

View File

@ -0,0 +1,16 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# IMPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
from src.models.graphs.graph import *;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# EXPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
__all__ = [
'Graph',
];

View File

@ -0,0 +1,62 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# IMPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
from __future__ import annotations;
from models.generated.commands import *;
from src.thirdparty.types import *;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# EXPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
__all__ = [
'Graph',
];
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# CLASS Graph
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
class Graph(object):
'''
a data structure for graphs
'''
nodes: list[Any];
edges: list[tuple[Any, Any]];
def __init__(self, nodes: list[Any], edges: list[Tuple[Any, Any]]):
assert all(len(edge) == 2 for edge in edges);
self.nodes = nodes;
self.edges = edges;
return;
def __len__(self) -> int:
return len(self.nodes);
def subgraph(self, nodes: list[Any]) -> Graph:
'''
@returns graph induced by subset of nodes
'''
return Graph(
nodes = [ u for u in self.nodes if u in nodes ],
edges = [ (u, v) for u, v in self.edges if u in nodes and v in nodes ],
);
def successors(self, u: Any) -> list[Any]:
'''
@returns
list of successor nodes
'''
return [ v for (u_, v) in self.edges if u == u_ ];
def predecessors(self, v: Any) -> list[Any]:
'''
@returns
list of predecessor nodes
'''
return [ u for (u, v_) in self.edges if v == v_ ];
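A quick illustration of the Graph helper, using the re-export from src.models.graphs shown above; the toy graph is made up:

```py
from src.models.graphs import Graph;

G = Graph(nodes=['a', 'b', 'c'], edges=[('a', 'b'), ('b', 'c'), ('c', 'a')]);
print(len(G));                        # 3
print(G.successors('b'));             # ['c']
print(G.predecessors('b'));           # ['a']
print(G.subgraph(['a', 'b']).edges);  # [('a', 'b')]
```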

View File

@ -0,0 +1,23 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# IMPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
from src.models.hirschberg.alignment import *;
from src.models.hirschberg.paths import *;
from src.models.hirschberg.penalties import *;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# EXPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
__all__ = [
'Alignment',
'AlignmentBasic',
'AlignmentPair',
'Directions',
'gap_penalty',
'missmatch_penalty',
];

View File

@ -0,0 +1,112 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# IMPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
from __future__ import annotations;
from src.thirdparty.types import *;
from src.thirdparty.maths import *;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# EXPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
__all__ = [
'Alignment',
'AlignmentBasic',
'AlignmentPair',
];
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Class Alignments
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
class Alignment():
@property
def parts1(self) -> List[str]:
if isinstance(self, AlignmentBasic):
return [self.word1];
elif isinstance(self, AlignmentPair):
return self.left.parts1 + self.right.parts1;
return [];
@property
def parts2(self) -> List[str]:
if isinstance(self, AlignmentBasic):
return [self.word2];
elif isinstance(self, AlignmentPair):
return self.left.parts2 + self.right.parts2;
return [];
def astree(
self,
indent: str = ' ',
prefix: str = '',
braces: bool = False,
branch: str = ' └──── ',
) -> str:
return '\n'.join(list(self._astree_recursion(indent=indent, prefix=prefix, braces=braces, branch=branch)));
def _astree_recursion(
self,
depth: int = 0,
indent: str = ' ',
prefix: str = '',
braces: bool = False,
branch: str = ' └──── ',
branch_atom: str = '˚└──── ',
) -> Generator[str, None, None]:
word1 = self.as_string1(braces=braces);
word2 = self.as_string2(braces=braces);
if isinstance(self, AlignmentBasic):
u = prefix + branch_atom if depth > 0 else prefix;
yield f'{u}{word2}';
if depth == 0:
yield f'{" "*len(u)}{"-"*len(word1)}';
yield f'{" "*len(u)}{word1}';
elif isinstance(self, AlignmentPair):
u = prefix + branch if depth > 0 else prefix;
yield f'{u}{word2}';
if depth == 0:
yield f'{" "*len(u)}{"-"*len(word1)}';
yield f'{" "*len(u)}{word1}';
yield f'{indent}{prefix}';
yield from self.left._astree_recursion(
depth = depth + 1,
indent = indent,
prefix = indent + prefix,
braces = braces,
branch = branch,
);
yield f'{indent}{prefix}';
yield from self.right._astree_recursion(
depth = depth + 1,
indent = indent,
prefix = indent + prefix,
braces = braces,
branch = branch,
);
return;
def as_string1(self, braces: bool = False) -> str:
if braces:
return f'({")(".join(self.parts1)})';
return ''.join(self.parts1);
def as_string2(self, braces: bool = False) -> str:
if braces:
return f'({")(".join(self.parts2)})';
return ''.join(self.parts2);
@dataclass
class AlignmentBasic(Alignment):
word1: str = field();
word2: str = field();
@dataclass
class AlignmentPair(Alignment):
left: Alignment = field();
right: Alignment = field();
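A small sketch of how the alignment classes compose, assuming the re-exports from src.models.hirschberg shown above; the words and gap characters are made up:

```py
from src.models.hirschberg import *;

pair = AlignmentPair(
    left  = AlignmentBasic(word1='ha-us', word2='haus-'),
    right = AlignmentBasic(word1='katze', word2='-atze'),
);
print(pair.as_string1());             # ha-uskatze
print(pair.as_string2(braces=True));  # (haus-)(-atze)
print(pair.astree());                 # tree view of the recursive decomposition
```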

View File

@ -0,0 +1,28 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# IMPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
from src.thirdparty.types import *;
from src.setup import config;
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# EXPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
__all__ = [
'Directions',
];
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# ENUMS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
class Directions(Enum):
UNSET = -1;
# set the priorities here
DIAGONAL = config.OPTIONS.hirschberg.move_priorities.diagonal;
HORIZONTAL = config.OPTIONS.hirschberg.move_priorities.horizontal;
VERTICAL = config.OPTIONS.hirschberg.move_priorities.vertical;

Some files were not shown because too many files have changed in this diff.