Introduction to using the rethon package#

How to run this notebook#

There are several possibilities to execute this notebook. You can, for instance,

  1. execute this notebook on Colab: Open In Colab, or

  2. execute this notebook locally in, for instance, JupyterLab. You can download this notebook from here.

Installing libraries#

[ ]:
%pip install rethon

Using the model#

We distinguish between two different types of RE processes:

  1. Globally searching RE processes: In each step the process optimizes the achievement function by considering all positions (either as theory candidates or commitments candidates).

  2. Locally searching RE processes: In each step the process optimizes the achievement function by considering positions that are in the neighbourhood of the current state.

Accordingly, there are different RE classes to be used.

Remark:

  • The optimization by means of a global search is computationally costly. At the moment, these processes converge in reasonable time only if the sentence pool is rather small (\(n< 10\)).

Additionally, there are two types of dialectical structures:

  • DAG (directed acyclic graph) based dialectical structures: All important properties of the structure are calculated once and then stored. This representation is fast for smaller sentence pools and should be used in combination with globally searching RE processes.

  • BDD (binary decision diagram) based dialectical structures: Important properties of the structure are calculated by using binary decision diagrams. This representation is comparably fast for most properties of the graph, even if the sentence pool is larger (\(n>10\)). However, for larger sentence pools it becomes difficult to calculate all dialectically consistent positions, axiomatic bases (without a confining source) and minimal positions.

Accordingly, we advise using DAG based dialectical structures for globally searching RE processes and BDD based dialectical structures for locally searching RE processes.
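
For instance, a minimal sketch of these recommended pairings (the small argument list and pool size are chosen purely for illustration):

[ ]:
from theodias import DAGDialecticalStructure, BDDDialecticalStructure
from rethon import StandardGlobalReflectiveEquilibrium, StandardLocalReflectiveEquilibrium

# a small illustrative structure: two arguments over a sentence pool of n = 4
arguments = [[1, 2], [3, -4]]
n = 4

# DAG based structure paired with a globally searching RE process
dag_ds = DAGDialecticalStructure.from_arguments(arguments, n)
global_re = StandardGlobalReflectiveEquilibrium(dag_ds)

# BDD based structure paired with a locally searching RE process
bdd_ds = BDDDialecticalStructure.from_arguments(arguments, n)
local_re = StandardLocalReflectiveEquilibrium(bdd_ds)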

Globally searching RE processes#

[1]:
from theodias import StandardPosition, DAGDialecticalStructure
from rethon import StandardGlobalReflectiveEquilibrium
from pprint import pprint

Instantiating the example in BBB (2021) with a sentence pool \(n=7\) as a DAG based dialectical structure:

[2]:
# the standard example with a sentence pool n=7
n = 7
arguments = [[1, 3],[1, 4],[1, 5],[1, -6], [2, -4],[2, 5],[2, 6],[2, 7]]
dag_ds = DAGDialecticalStructure.from_arguments(arguments, n)
global_re = StandardGlobalReflectiveEquilibrium(dag_ds)

Initializing a globally searching RE process with initial commitments \(\mathcal{C}_0=\{3,4,5\}\) and running the model:

[3]:
init_coms = StandardPosition.from_set({3, 4, 5}, n)
global_re.set_initial_state(init_coms)
global_re.re_process()

Showing the results. Here, evolution represents the succession of RE states \(C_0, T_0, C_1, T_1, \dots , C_{final}, T_{final}\).

[4]:
pprint(global_re.state().as_dict())
{'alternatives': [set(), set(), set(), set(), set(), set()],
 'evolution': [{3, 4, 5},
               {1},
               {1, 3, 4, 5, -6, -2},
               {1},
               {1, 3, 4, 5, -6, -2},
               {1}],
 'finished': True,
 'time_line': [0, 1, 2, 3, 4, 5]}

There are some convenience methods to show different aspects of the result:

[5]:
print(f'Initial commitments: {global_re.state().initial_commitments()}')
print(f'Theory evolution: {global_re.state().theory_evolution()}')
print(f'Commitments evolution: {global_re.state().commitments_evolution()}')
Initial commitments: {3, 4, 5}
Theory evolution: [{1}, {1}, {1}]
Commitments evolution: [{3, 4, 5}, {1, 3, 4, 5, -6, -2}, {1, 3, 4, 5, -6, -2}]

Branching#

In each step, the standard model searches for commitments or theories, respectively, that optimize an achievement function of the epistemic state. If several positions compare equally well with regard to this function, the standard model chooses the next position randomly among them. The different possibilities for a specific model run are stored in the alternatives field of the RE state.

However, you can also calculate all the different paths such a process can take, given this kind of underdetermination, by using a process container in the following way:

[6]:
from rethon import FullBranchREContainer

init_coms = StandardPosition.from_set({3, 4, 5, 6, 7}, n)
global_re.set_initial_state(init_coms)
# A process container that will run all possible paths the re process can take
re_container = FullBranchREContainer()
branches = re_container.result_states(global_re)

which will return all branches as RE states:

[7]:
pprint([state.as_dict() for state in branches])
[{'alternatives': [set(), {{2, 3}}, set(), set(), set(), set(), set(), set()],
  'evolution': [{3, 4, 5, 6, 7},
                {1, 7},
                {1, 3, 4, 5, 7, -6, -2},
                {1},
                {1, 3, 4, 5, -6, -2},
                {1},
                {1, 3, 4, 5, -6, -2},
                {1}],
  'finished': True,
  'time_line': [0, 1, 2, 3, 4, 5, 6, 7]},
 {'alternatives': [set(), {{1, 7}}, set(), set(), set(), set(), set(), set()],
  'evolution': [{3, 4, 5, 6, 7},
                {2, 3},
                {2, 3, 5, 6, 7, -4, -1},
                {2},
                {2, 5, 6, 7, -4, -1},
                {2},
                {2, 5, 6, 7, -4, -1},
                {2}],
  'finished': True,
  'time_line': [0, 1, 2, 3, 4, 5, 6, 7]}]
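
Since every branch is returned as an ordinary RE state, the convenience methods of RE states can be applied to each of them, for instance:

[ ]:
for state in branches:
    print(f'Initial commitments: {state.initial_commitments()}')
    print(f'Final theory: {state.last_theory()}')
    print(f'Final commitments: {state.last_commitments()}')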

Locally searching RE processes#

If you want to use locally searching RE processes, simply use BDDDialecticalStructure and StandardLocalReflectiveEquilibrium in the same way as above:

[8]:
from theodias import StandardPosition, BDDDialecticalStructure
from rethon import StandardLocalReflectiveEquilibrium
from pprint import pprint

# our standard example with a sentence pool n=7
n = 7
arguments = [[1, 3],[1, 4],[1, 5],[1, -6], [2, -4],[2, 5],[2, 6],[2, 7]]
bdd_ds = BDDDialecticalStructure.from_arguments(arguments, n)
init_coms = StandardPosition.from_set({3, 4, 5}, n)
local_re = StandardLocalReflectiveEquilibrium(bdd_ds, init_coms)
local_re.set_initial_state(init_coms)
local_re.re_process()
pprint(local_re.state().as_dict())
{'alternatives': [set(),
                  set(),
                  {{-6, 3, 4, 5}, {1, 3, 4, 5}},
                  set(),
                  {{3, 4, 5, -6, -2}},
                  set(),
                  set(),
                  set(),
                  set(),
                  set()],
 'evolution': [{3, 4, 5},
               {1},
               {3, 4, 5, -2},
               {1},
               {1, 3, 4, 5, -2},
               {1},
               {1, 3, 4, 5, -6, -2},
               {1},
               {1, 3, 4, 5, -6, -2},
               {1}],
 'finished': True,
 'time_line': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]}

Model Parameters#

The standard model has different model parameters, which are initialized with default values:

[9]:
pprint('Model parameters of globally searching REs:')
pprint(global_re.model_parameters())
pprint('Model parameters of locally searching REs:')
pprint(local_re.model_parameters())
'Model parameters of globally searching REs:'
{'account_penalties': [0.0, 0.3, 1.0, 1.0],
 'faithfulness_penalties': [0.0, 0.0, 1.0, 1.0],
 'weights': {'account': 0.35, 'faithfulness': 0.1, 'systematicity': 0.55}}
'Model parameters of locally searching REs:'
{'account_penalties': [0.0, 0.3, 1.0, 1.0],
 'faithfulness_penalties': [0.0, 0.0, 1.0, 1.0],
 'neighbourhood_depth': 1,
 'weights': {'account': 0.35, 'faithfulness': 0.1, 'systematicity': 0.55}}

and which can be set to different values:

[10]:
local_re.set_model_parameters(neighbourhood_depth = 7)
# rerunning the model
local_re.re_process()
pprint(local_re.state().as_dict())
{'alternatives': [set(), set(), set(), set(), set(), set()],
 'evolution': [{3, 4, 5},
               {1},
               {1, 3, 4, 5, -6, -2},
               {1},
               {1, 3, 4, 5, -6, -2},
               {1}],
 'finished': True,
 'time_line': [0, 1, 2, 3, 4, 5]}
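
Other parameters can be changed in the same way. For instance, the following sketch adjusts the weights of the achievement function; it assumes that set_model_parameters accepts the parameter names shown above as keyword arguments, and the chosen values are purely illustrative:

[ ]:
# adjusting the weights of the achievement function (illustrative values)
global_re.set_model_parameters(weights={'account': 0.4,
                                        'systematicity': 0.4,
                                        'faithfulness': 0.2})
# rerunning the model with the new weights
global_re.re_process()
pprint(global_re.state().as_dict())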

Export to JSON#

De-/serializing rethon objects#

You can serialize and deserialize theodias positions, dialectical structures, rethon RE states and whole model runs, as well as any compounds thereof, as long as the json Python module can handle them (e.g., lists, dictionaries). For more details, consult the theodias tutorial (👉 link).

For instance, the following code will serialize a model run:

[11]:
from theodias import StandardPosition, DAGDialecticalStructure
from rethon import StandardGlobalReflectiveEquilibrium
from rethon.util import rethon_dumps
from pprint import pprint
# our standard example with a sentence pool n=7
n = 7
arguments = [[1, 3],[1, 4],[1, 5],[1, -6], [2, -4],[2, 5],[2, 6],[2, 7]]
dag_ds = DAGDialecticalStructure.from_arguments(arguments, n)
global_re = StandardGlobalReflectiveEquilibrium(dag_ds)
init_coms = StandardPosition.from_set({3, 4, 5}, n)
global_re.set_initial_state(init_coms)
global_re.re_process()

# serializing a model run as JSON String
re_run_json_str = rethon_dumps(global_re,
                               indent=4)
print(re_run_json_str)
{
    "model_name": "StandardGlobalReflectiveEquilibrium",
    "dialectical_structure": {
        "arguments": [
            [
                1,
                3
            ],
            [
                1,
                4
            ],
            [
                1,
                5
            ],
            [
                1,
                -6
            ],
            [
                2,
                -4
            ],
            [
                2,
                5
            ],
            [
                2,
                6
            ],
            [
                2,
                7
            ]
        ],
        "tau_name": null,
        "n_unnegated_sentence_pool": 7
    },
    "model_parameters": {
        "weights": {
            "account": 0.35,
            "systematicity": 0.55,
            "faithfulness": 0.1
        },
        "account_penalties": [
            0.0,
            0.30000001192092896,
            1.0,
            1.0
        ],
        "faithfulness_penalties": [
            0.0,
            0.0,
            1.0,
            1.0
        ]
    },
    "state": {
        "finished": true,
        "evolution": [
            {
                "n_unnegated_sentence_pool": 7,
                "position": [
                    3,
                    4,
                    5
                ]
            },
            {
                "n_unnegated_sentence_pool": 7,
                "position": [
                    1
                ]
            },
            {
                "n_unnegated_sentence_pool": 7,
                "position": [
                    1,
                    3,
                    4,
                    5,
                    -6,
                    -2
                ]
            },
            {
                "n_unnegated_sentence_pool": 7,
                "position": [
                    1
                ]
            },
            {
                "n_unnegated_sentence_pool": 7,
                "position": [
                    1,
                    3,
                    4,
                    5,
                    -6,
                    -2
                ]
            },
            {
                "n_unnegated_sentence_pool": 7,
                "position": [
                    1
                ]
            }
        ],
        "alternatives": [
            [],
            [],
            [],
            [],
            [],
            []
        ],
        "time_line": [
            0,
            1,
            2,
            3,
            4,
            5
        ]
    }
}
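
Deserialization works the other way round. The following is only a sketch: it assumes that rethon.util provides a rethon_loads counterpart to rethon_dumps (analogous to json.loads and json.dumps); see the theodias tutorial linked above for details:

[ ]:
from rethon.util import rethon_loads

# reconstruct the model run from the JSON string produced above
re_run = rethon_loads(re_run_json_str)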

Extending the model#

If you want to do more than just adjust model parameters, you can alter and extend the model in different ways. The basic idea is always the same: write your own reflective equilibrium class and override methods according to your needs.

An RE process is a succession of positions, starting with an initial position \(\mathcal{C_0}\):

\[\mathcal{C_0} \rightarrow \mathcal{T_0} \rightarrow \mathcal{C_1} \rightarrow \mathcal{T_1} \rightarrow \dots \rightarrow \mathcal{C_{final}} \rightarrow \mathcal{T_{final}}\]

Accordingly, there are two different sorts of revisions: (i) adopting a new theory and (ii) adopting new commitments. The adoption of new theories and commitments is determined by two criteria for theories and two criteria for commitments:

  1. A theory-candidates criterion \(TC\) determines theory candidates \(TC_{i+1}=\{\mathcal{T}^{i+1}_1, \dots \mathcal{T}^{i+1}_n\}\). This criterion can in principle take all past steps \(\mathcal{C_0}, \mathcal{T_0}, \mathcal{C_1}, \mathcal{T_1}, \dots, \mathcal{T_{i}}, \mathcal{C_{i}}\) into account.

  2. An additional criterion chooses among those candidates the next theory: \(\{\mathcal{T}^{i+1}_1, \dots \mathcal{T}^{i+1}_n\} \rightarrow \mathcal{T_{i+1}}\).

  3. A commitments-candidates criterion \(CC\) determines commitments candidates \(CC_{i+1}=\{\mathcal{C}^{i+1}_1, \dots \mathcal{C}^{i+1}_n\}\). This criterion can in principle take all past steps \(\mathcal{C_0}, \mathcal{T_0}, \mathcal{C_1}, \mathcal{T_1}, \dots, \mathcal{T_{i}}, \mathcal{C_{i}}, \mathcal{T_{i+1}}\) into account.

  4. An additional criterion chooses among those candidates the next commitments: \(\{\mathcal{C}^{i+1}_1, \dots \mathcal{C}^{i+1}_n\} \rightarrow \mathcal{C_{i+1}}\).

Finally, there is a stop criterion that specifies under which conditions an RE process is considered finished.

Extending the standard model by adjusting the achievement function#

The standard model chooses the next theory \(\mathcal{T_i}\) and the next commitments \(\mathcal{C_i}\), respectively, by optimizing an achievement function

\[Z(\mathcal{C},\mathcal{T} | \mathcal{C}_0):= \alpha_A A(\mathcal{C}, \mathcal{T})+ \alpha_S S(\mathcal{T}) + \alpha_F F(\mathcal{C}| \mathcal{C}_0)\]
  1. Adopting a new theory: Choose a theory \(\mathcal{T_{i+1}}\) that maximizes \(Z(\mathcal{C_i},\mathcal{T_{i+1}} | \mathcal{C}_0)\). If there are several maximizing new theories, choose randomly among them, unless the last theory is among them. In that case choose \(\mathcal{T_{i+1}} = \mathcal{T_{i}}\).

  2. Adopting new commitments: Choose new commitments \(\mathcal{C_{i+1}}\) that maximize \(Z(\mathcal{C_{i+1}},\mathcal{T_{i+1}} | \mathcal{C}_0)\). If there are several maximizing new commitments, choose randomly among them, unless the last commitments are among them. In that case choose \(\mathcal{C_{i+1}} = \mathcal{C_{i}}\).

A simple way to adapt the model is to change the functions \(A\), \(F\), \(S\) or the achievement function as a whole.

For instance, the standard model uses a quadratic term for calculating account:

\[A(\mathcal{C}, \mathcal{T}):=\left( 1-\left(\frac{D_{0,0.3,1,1}(\mathcal{C}, \overline{\mathcal{T}})}{N}\right)^2 \right)\]

If you want to get rid of the quadratic form and use instead:

\[A(\mathcal{C}, \mathcal{T}):=\left( 1-\frac{D_{0,0.3,1,1}(\mathcal{C}, \overline{\mathcal{T}})}{N} \right)\]

you can simply override the account function of the standard model in the following way:

[12]:
from theodias import StandardPosition, DAGDialecticalStructure
from rethon import StandardGlobalReflectiveEquilibrium

class NewAccountReflectiveEquilibrium(StandardGlobalReflectiveEquilibrium):
    def account(self, commitments, theory) -> float:
        # account as 1 - D/N instead of the standard 1 - (D/N)^2
        return 1 - (self.hamming_distance(commitments,
                                          self.dialectical_structure().closure(theory),
                                          self.model_parameter("account_penalties"))
                    / self.dialectical_structure().sentence_pool().size())

And use this class as your reflective equilibrium class:

[13]:
from pprint import pprint

# our standard example with a sentence pool n=7
n = 7
arguments = [[1, 3],[1, 4],[1, 5],[1, -6], [2, -4],[2, 5],[2, 6],[2, 7]]
dag_ds = DAGDialecticalStructure.from_arguments(arguments, n)
new_re = NewAccountReflectiveEquilibrium(dag_ds)

init_coms = StandardPosition.from_set({3, 4, 5}, n)
new_re.set_initial_state(init_coms)
new_re.re_process()
pprint(new_re.state().as_dict())
{'alternatives': [set(), set(), set(), set(), set(), set()],
 'evolution': [{3, 4, 5},
               {1},
               {1, 3, 4, 5, -6, -2},
               {1},
               {1, 3, 4, 5, -6, -2},
               {1}],
 'finished': True,
 'time_line': [0, 1, 2, 3, 4, 5]}

Extending the standard model by redefining the candidates criteria#

The standard model uses the achievement function to determine the next commitments and theory candidates. In particular, the candidates for the next theory \(\mathcal{T}_{i+1}\) depend only on \(\mathcal{C}_{i}\) and \(\mathcal{C}_{0}\), and the candidates for the next commitments \(\mathcal{C}_{i+1}\) depend only on \(\mathcal{T}_{i+1}\) and \(\mathcal{C}_{0}\).

Suppose you want to change this behaviour. For instance, you might prefer to take other preliminary states into account. Just overriding the achievement function (or its constituents) won't do, because you cannot change which commitments and which theory are used to calculate the achievement. However, you can override the criteria that determine the commitments and theory candidates directly.

Suppose now that you want to choose the next commitments candidates by maximizing the achievement function with respect to the last commitments instead of the initial commitments: That is, instead of maximizing \(Z(\mathcal{C_{i+1}},\mathcal{T_{i+1}} | \mathcal{C}_0)\) you want to maximize \(Z(\mathcal{C_{i+1}},\mathcal{T_{i+1}} | \mathcal{C}_{i})\).

To do that, you can simply override the function that determines the commitments candidates in the following way:

[14]:
from theodias import StandardPosition, DAGDialecticalStructure
from rethon import StandardGlobalReflectiveEquilibrium


class MarkovianGlobalReflectiveEquilibrium(StandardGlobalReflectiveEquilibrium):

    def commitment_candidates(self, **kwargs):
        candidate_commitments = set()
        max_achievement = 0
        for candidate_commitment in self.dialectical_structure().minimally_consistent_positions():
            current_achievement = self.achievement(candidate_commitment,
                                                   self.state().last_theory(),
                                                   # instead of the initial commitments we calculate achievement
                                                   # w.r.t. the last commitments
                                                   self.state().last_commitments())

            # update achievement and candidates
            if current_achievement > max_achievement:
                candidate_commitments = {candidate_commitment}
                max_achievement = current_achievement

            elif current_achievement == max_achievement:
                candidate_commitments.add(candidate_commitment)
        # in case the last state is already optimal, we just return it
        if self.state().last_commitments() in candidate_commitments:
            return {self.state().last_commitments()}

        return candidate_commitments

[15]:
from pprint import pprint

# our standard example with a sentence pool n=7
n = 7
arguments = [[1, 3],[1, 4],[1, 5],[1, -6], [2, -4],[2, 5],[2, 6],[2, 7]]
dag_ds = DAGDialecticalStructure.from_arguments(arguments, n)
new_re = MarkovianGlobalReflectiveEquilibrium(dag_ds)

init_coms = StandardPosition.from_set({3, 4, 5}, n)
new_re.set_initial_state(init_coms)
new_re.re_process()
pprint(new_re.state().as_dict())
{'alternatives': [set(), set(), set(), set(), set(), set()],
 'evolution': [{3, 4, 5},
               {1},
               {1, 3, 4, 5, -6, -2},
               {1},
               {1, 3, 4, 5, -6, -2},
               {1}],
 'finished': True,
 'time_line': [0, 1, 2, 3, 4, 5]}

In the same way, you could adapt one of the following methods by subclassing ReflectiveEquilibrium (or classes that implement it) and overriding them:

  • pick_commitment_candidate: The condition that chooses, as the next commitments, one of the commitments candidates determined by the commitments-candidates criterion \(CC\) (commitment_candidates).

  • theory_candidates: The theory-candidates criterion \(TC\).

  • pick_theory_candidate: The condition that chooses, as the next theory, one of the theory candidates determined by the theory-candidates criterion \(TC\).

  • finished: The criterion that determines when the process ends (see the sketch below).
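
For instance, the stop criterion can be replaced in the same manner as the candidates criteria above. The following is only a hedged sketch: the signature of finished is an assumption here (modelled after the **kwargs signature of commitment_candidates above), and the step cap is purely illustrative:

[ ]:
from rethon import StandardGlobalReflectiveEquilibrium

class CappedReflectiveEquilibrium(StandardGlobalReflectiveEquilibrium):
    # purely illustrative cap on the number of adopted commitments
    MAX_COMMITMENTS_STEPS = 50

    def finished(self, **kwargs) -> bool:
        # stop when the standard criterion is met or when the process has
        # already adopted MAX_COMMITMENTS_STEPS commitments
        return (super().finished(**kwargs)
                or len(self.state().commitments_evolution()) >= self.MAX_COMMITMENTS_STEPS)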