
API Reference

This reference provides detailed documentation for all EasyScience classes and functions.

Core Variables and Descriptors

Descriptor Base Classes

easyscience.variable.DescriptorBase

Bases: SerializerComponent

This is the base of all variable descriptions for models. It contains all information to describe a single unique property of an object. This description includes a name and value as well as optionally a unit, description and url (for reference material). Also implemented is a callback so that the value can be read/set from a linked library object.

A Descriptor is typically something which describes part of a model and is non-fittable and generally changes the state of an object.
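
DescriptorBase is abstract, so it cannot be instantiated directly. Below is a minimal sketch of the shared metadata attributes, using the concrete DescriptorNumber subclass documented further down this page (the unit and the unique-name value shown in the comments are illustrative):

from easyscience.variable import DescriptorNumber

# DescriptorBase itself is abstract; DescriptorNumber is a concrete subclass.
d = DescriptorNumber(
    name='wavelength',
    value=1.54,
    unit='angstrom',
    description='Incident wavelength',
    url='https://en.wikipedia.org/wiki/Wavelength',
)
print(d.name)              # 'wavelength'
print(d.display_name)      # falls back to the name when no display name is set
d.display_name = 'lambda'  # undo/redo aware setter
print(d.unique_name)       # auto-generated, e.g. 'DescriptorNumber_0'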

Source code in src/easyscience/variable/descriptor_base.py
class DescriptorBase(SerializerComponent, metaclass=abc.ABCMeta):
    """
    This is the base of all variable descriptions for models. It contains all information to describe a single
    unique property of an object. This description includes a name and value as well as optionally a unit, description
    and url (for reference material). Also implemented is a callback so that the value can be read/set from a linked
    library object.

    A `Descriptor` is typically something which describes part of a model and is non-fittable and generally changes the
    state of an object.
    """

    _global_object = global_object
    # Used by serializer
    _REDIRECT = {'parent': None}

    def __init__(
        self,
        name: str,
        unique_name: Optional[str] = None,
        description: Optional[str] = None,
        url: Optional[str] = None,
        display_name: Optional[str] = None,
        parent: Optional[Any] = None,
    ):
        """
        This is the base of variables for models. It contains all information to describe a single
        unique property of an object. This description includes a name, description and url (for reference material).

        A `Descriptor` is typically something which describes part of a model and is non-fittable and generally changes
        the state of an object.

        :param name: Name of this object
        :param description: A brief summary of what this object is
        :param url: Lookup url for documentation/information
        :param display_name: A pretty name for the object
        :param parent: The object which this descriptor is attached to

        .. note:: Undo/Redo functionality is implemented for the attributes `name` and `display_name`.
        """

        if unique_name is None:
            unique_name = global_object.generate_unique_name(self.__class__.__name__)
        self._unique_name = unique_name

        if not isinstance(name, str):
            raise TypeError('Name must be a string')
        self._name: str = name

        if display_name is not None and not isinstance(display_name, str):
            raise TypeError('Display name must be a string or None')
        self._display_name: str = display_name

        if description is not None and not isinstance(description, str):
            raise TypeError('Description must be a string or None')
        if description is None:
            description = ''
        self._description: str = description

        if url is not None and not isinstance(url, str):
            raise TypeError('url must be a string')
        if url is None:
            url = ''
        self._url: str = url

        # Let the collective know we've been assimilated
        self._parent = parent
        global_object.map.add_vertex(self, obj_type='created')
        # Make the connection between self and parent
        if parent is not None:
            global_object.map.add_edge(parent, self)

    @property
    def name(self) -> str:
        """
        Get the name of the object.

        :return: name of the object.
        """
        return self._name

    @name.setter
    @property_stack
    def name(self, new_name: str) -> None:
        """
        Set the name.

        :param new_name: name of the object.
        """
        if not isinstance(new_name, str):
            raise TypeError('Name must be a string')
        self._name = new_name

    @property
    def display_name(self) -> str:
        """
        Get a pretty display name.

        :return: The pretty display name.
        """
        display_name = self._display_name
        if display_name is None:
            display_name = self._name
        return display_name

    @display_name.setter
    @property_stack
    def display_name(self, name: str) -> None:
        """
        Set the pretty display name.

        :param name: Pretty display name of the object.
        """
        if name is not None and not isinstance(name, str):
            raise TypeError('Display name must be a string or None')
        self._display_name = name

    @property
    def description(self) -> str:
        """
        Get the description of the object.

        :return: description of the object.
        """
        return self._description

    @description.setter
    def description(self, description: str) -> None:
        """
        Set the description of the object.

        :param description: description of the object.
        """
        if description is not None and not isinstance(description, str):
            raise TypeError('Description must be a string or None')
        self._description = description

    @property
    def url(self) -> str:
        """
        Get the url of the object.

        :return: url of the object.
        """
        return self._url

    @url.setter
    def url(self, url: str) -> None:
        """
        Set the url of the object.

        :param url: url of the object.
        """
        if url is not None and not isinstance(url, str):
            raise TypeError('url must be a string')
        self._url = url

    @property
    def unique_name(self) -> str:
        """
        Get the unique name of this object.

        :return: Unique name of this object
        """
        return self._unique_name

    @unique_name.setter
    def unique_name(self, new_unique_name: str):
        """Set a new unique name for the object. The old name is still kept in the map.

        :param new_unique_name: New unique name for the object"""
        if not isinstance(new_unique_name, str):
            raise TypeError('Unique name has to be a string.')
        self._unique_name = new_unique_name
        global_object.map.add_vertex(self)

    @property
    @abc.abstractmethod
    def value(self) -> Any:
        """Get the value of the object."""

    @value.setter
    @abc.abstractmethod
    def value(self, value: Any) -> None:
        """Set the value of the object."""

    @abc.abstractmethod
    def __repr__(self) -> str:
        """Return printable representation of the object."""

    def __copy__(self) -> DescriptorBase:
        """Return a copy of the object."""
        temp = self.as_dict(skip=['unique_name'])
        new_obj = self.__class__.from_dict(temp)
        return new_obj

_global_object class-attribute instance-attribute

_global_object = global_object

_REDIRECT class-attribute instance-attribute

_REDIRECT = {'parent': None}

_unique_name instance-attribute

_unique_name = unique_name

_name instance-attribute

_name = name

_display_name instance-attribute

_display_name = display_name

_description instance-attribute

_description = description

_url instance-attribute

_url = url

_parent instance-attribute

_parent = parent

name property writable

name

Get the name of the object.

:return: name of the object.

display_name property writable

display_name

Get a pretty display name.

:return: The pretty display name.

description property writable

description

Get the description of the object.

:return: description of the object.

url property writable

url

Get the url of the object.

:return: url of the object.

unique_name property writable

unique_name

Get the unique name of this object.

:return: Unique name of this object

value abstractmethod property writable

value

Get the value of the object.

__init__

__init__(
    name,
    unique_name=None,
    description=None,
    url=None,
    display_name=None,
    parent=None,
)

This is the base of variables for models. It contains all information to describe a single unique property of an object. This description includes a name, description and url (for reference material).

A Descriptor is typically something which describes part of a model and is non-fittable and generally changes the state of an object.

:param name: Name of this object
:param description: A brief summary of what this object is
:param url: Lookup url for documentation/information
:param display_name: A pretty name for the object
:param parent: The object which this descriptor is attached to

.. note:: Undo/Redo functionality is implemented for the attributes name and display_name.

Source code in src/easyscience/variable/descriptor_base.py
def __init__(
    self,
    name: str,
    unique_name: Optional[str] = None,
    description: Optional[str] = None,
    url: Optional[str] = None,
    display_name: Optional[str] = None,
    parent: Optional[Any] = None,
):
    """
    This is the base of variables for models. It contains all information to describe a single
    unique property of an object. This description includes a name, description and url (for reference material).

    A `Descriptor` is typically something which describes part of a model and is non-fittable and generally changes
    the state of an object.

    :param name: Name of this object
    :param description: A brief summary of what this object is
    :param url: Lookup url for documentation/information
    :param display_name: A pretty name for the object
    :param parent: The object which this descriptor is attached to

    .. note:: Undo/Redo functionality is implemented for the attributes `name` and `display_name`.
    """

    if unique_name is None:
        unique_name = global_object.generate_unique_name(self.__class__.__name__)
    self._unique_name = unique_name

    if not isinstance(name, str):
        raise TypeError('Name must be a string')
    self._name: str = name

    if display_name is not None and not isinstance(display_name, str):
        raise TypeError('Display name must be a string or None')
    self._display_name: str = display_name

    if description is not None and not isinstance(description, str):
        raise TypeError('Description must be a string or None')
    if description is None:
        description = ''
    self._description: str = description

    if url is not None and not isinstance(url, str):
        raise TypeError('url must be a string')
    if url is None:
        url = ''
    self._url: str = url

    # Let the collective know we've been assimilated
    self._parent = parent
    global_object.map.add_vertex(self, obj_type='created')
    # Make the connection between self and parent
    if parent is not None:
        global_object.map.add_edge(parent, self)

__repr__ abstractmethod

__repr__()

Return printable representation of the object.

Source code in src/easyscience/variable/descriptor_base.py
@abc.abstractmethod
def __repr__(self) -> str:
    """Return printable representation of the object."""

__copy__

__copy__()

Return a copy of the object.

Source code in src/easyscience/variable/descriptor_base.py
def __copy__(self) -> DescriptorBase:
    """Return a copy of the object."""
    temp = self.as_dict(skip=['unique_name'])
    new_obj = self.__class__.from_dict(temp)
    return new_obj
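
Copying goes through the serializer: the object is exported with as_dict (skipping unique_name) and rebuilt with from_dict, so the copy receives a freshly generated unique name. A short sketch using the concrete DescriptorNumber subclass:

import copy

from easyscience.variable import DescriptorNumber

original = DescriptorNumber(name='scale', value=1.0)
duplicate = copy.copy(original)

print(duplicate.value)                                # 1.0
print(duplicate.unique_name == original.unique_name)  # False: the unique name is regenerated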

easyscience.variable.DescriptorNumber

Bases: DescriptorBase

A Descriptor for Number values with units. The internal representation is a scipp scalar.
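
A minimal usage sketch (the unit and values are illustrative):

from easyscience.variable import DescriptorNumber

length = DescriptorNumber(
    name='length',
    value=2.0,
    unit='m',
    variance=0.01,
    description='Sample thickness',
)
print(length.value)  # 2.0
print(length.unit)   # 'm'
print(length.error)  # 0.1, the square root of the variance
print(length)        # <DescriptorNumber 'length': 2.0000 ± 0.1000 m>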

Source code in src/easyscience/variable/descriptor_number.py
class DescriptorNumber(DescriptorBase):
    """
    A `Descriptor` for Number values with units.  The internal representation is a scipp scalar.
    """

    def __init__(
        self,
        name: str,
        value: numbers.Number,
        unit: Optional[Union[str, sc.Unit]] = '',
        variance: Optional[numbers.Number] = None,
        unique_name: Optional[str] = None,
        description: Optional[str] = None,
        url: Optional[str] = None,
        display_name: Optional[str] = None,
        parent: Optional[Any] = None,
        **kwargs: Any,  # Additional keyword arguments (used for (de)serialization)
    ):
        """Constructor for the DescriptorNumber class

        param name: Name of the descriptor
        param value: Value of the descriptor
        param unit: Unit of the descriptor
        param variance: Variance of the descriptor
        param description: Description of the descriptor
        param url: URL of the descriptor
        param display_name: Display name of the descriptor
        param parent: Parent of the descriptor
        .. note:: Undo/Redo functionality is implemented for the attributes `variance`, `error`, `unit` and `value`.
        """
        self._observers: List[DescriptorNumber] = []

        # Extract serializer_id if provided during deserialization
        if '__serializer_id' in kwargs:
            self.__serializer_id = kwargs.pop('__serializer_id')

        if not isinstance(value, numbers.Number) or isinstance(value, bool):
            raise TypeError(f'{value=} must be a number')
        if variance is not None:
            if not isinstance(variance, numbers.Number) or isinstance(variance, bool):
                raise TypeError(f'{variance=} must be a number or None')
            if variance < 0:
                raise ValueError(f'{variance=} must be positive')
            variance = float(variance)
        if not isinstance(unit, sc.Unit) and not isinstance(unit, str):
            raise TypeError(f'{unit=} must be a scipp unit or a string representing a valid scipp unit')
        try:
            self._scalar = sc.scalar(float(value), unit=unit, variance=variance)
        except Exception as message:
            raise UnitError(message)
        super().__init__(
            name=name,
            unique_name=unique_name,
            description=description,
            url=url,
            display_name=display_name,
            parent=parent,
        )

        # Call convert_unit during initialization to ensure that the unit has no numbers in it, and to ensure unit consistency.
        if self.unit is not None:
            self._convert_unit(self._base_unit())

    @classmethod
    def from_scipp(cls, name: str, full_value: Variable, **kwargs) -> DescriptorNumber:
        """
        Create a DescriptorNumber from a scipp scalar.

        :param name: Name of the descriptor
        :param full_value: Value of the descriptor as a scipp scalar
        :param kwargs: Additional parameters for the descriptor
        :return: DescriptorNumber
        """
        if not isinstance(full_value, Variable):
            raise TypeError(f'{full_value=} must be a scipp scalar')
        if len(full_value.dims) != 0:
            raise TypeError(f'{full_value=} must be a scipp scalar')
        return cls(name=name, value=full_value.value, unit=full_value.unit, variance=full_value.variance, **kwargs)

    def _attach_observer(self, observer: DescriptorNumber) -> None:
        """Attach an observer to the descriptor."""
        self._observers.append(observer)
        if not hasattr(self, '_DescriptorNumber__serializer_id'):
            self.__serializer_id = str(uuid.uuid4())

    def _detach_observer(self, observer: DescriptorNumber) -> None:
        """Detach an observer from the descriptor."""
        self._observers.remove(observer)
        if not self._observers:
            del self.__serializer_id

    def _notify_observers(self) -> None:
        """Notify all observers of a change."""
        for observer in self._observers:
            observer._update()

    def _validate_dependencies(self, origin=None) -> None:
        """Ping all observers to check if any cyclic dependencies have been introduced.

        :param origin: Unique_name of the origin of this validation check. Used to avoid cyclic dependencies.
        """
        if origin == self.unique_name:
            raise RuntimeError(
                '\n Cyclic dependency detected!\n'
                + f'An update of {self.unique_name} leads to it updating itself.\n'
                + 'Please check your dependencies.'
            )
        if origin is None:
            origin = self.unique_name
        for observer in self._observers:
            observer._validate_dependencies(origin=origin)

    @property
    def full_value(self) -> Variable:
        """
        Get the value of self as a scipp scalar. This should be usable for most cases.

        :return: Value of self with unit.
        """
        return self._scalar

    @full_value.setter
    def full_value(self, full_value: Variable) -> None:
        raise AttributeError(
            f'Full_value is read-only. Change the value and variance separately. Or create a new {self.__class__.__name__}.'
        )

    @property
    def value(self) -> numbers.Number:
        """
        Get the value. This should be usable for most cases. The full value can be obtained from `obj.full_value`.

        :return: Value of self with unit.
        """
        return self._scalar.value

    @value.setter
    @notify_observers
    @property_stack
    def value(self, value: numbers.Number) -> None:
        """
        Set the value of self. This should be usable for most cases. The full value can be obtained from `obj.full_value`.

        :param value: New value of self
        """
        if not isinstance(value, numbers.Number) or isinstance(value, bool):
            raise TypeError(f'{value=} must be a number')
        self._scalar.value = float(value)

    @property
    def unit(self) -> str:
        """
        Get the unit.

        :return: Unit as a string.
        """
        return str(self._scalar.unit)

    @unit.setter
    def unit(self, unit_str: str) -> None:
        raise AttributeError(
            (
                f'Unit is read-only. Use convert_unit to change the unit between allowed types '
                f'or create a new {self.__class__.__name__} with the desired unit.'
            )
        )  # noqa: E501

    @property
    def variance(self) -> float:
        """
        Get the variance.

        :return: variance.
        """
        return self._scalar.variance

    @variance.setter
    @notify_observers
    @property_stack
    def variance(self, variance_float: float) -> None:
        """
        Set the variance.

        :param variance_float: Variance as a float
        """
        if variance_float is not None:
            if not isinstance(variance_float, numbers.Number):
                raise TypeError(f'{variance_float=} must be a number or None')
            if variance_float < 0:
                raise ValueError(f'{variance_float=} must be positive')
            variance_float = float(variance_float)
        self._scalar.variance = variance_float

    @property
    def error(self) -> float:
        """
        The standard deviation for the parameter.

        :return: Error associated with parameter
        """
        if self._scalar.variance is None:
            return None
        return float(np.sqrt(self._scalar.variance))

    @error.setter
    @notify_observers
    @property_stack
    def error(self, value: float) -> None:
        """
        Set the standard deviation for the parameter.

        :param value: New error value
        """
        if value is not None:
            if not isinstance(value, numbers.Number):
                raise TypeError(f'{value=} must be a number or None')
            if value < 0:
                raise ValueError(f'{value=} must be positive')
            value = float(value)
            self._scalar.variance = value**2
        else:
            self._scalar.variance = None

    # When we convert units internally, we don't want to notify observers as this can cause infinite recursion.
    # Therefore the convert_unit method is split into two methods, a private internal method and a public method.
    def _convert_unit(self, unit_str: str) -> None:
        """
        Convert the value from one unit system to another.

        :param unit_str: New unit in string form
        """
        if not isinstance(unit_str, str):
            raise TypeError(f'{unit_str=} must be a string representing a valid scipp unit')
        new_unit = sc.Unit(unit_str)

        # Save the current state for undo/redo
        old_scalar = self._scalar

        # Perform the unit conversion
        try:
            new_scalar = self._scalar.to(unit=new_unit)
        except Exception as e:
            raise UnitError(f'Failed to convert unit: {e}') from e

        # Define the setter function for the undo stack
        def set_scalar(obj, scalar):
            obj._scalar = scalar

        # Push to undo stack
        self._global_object.stack.push(
            PropertyStack(self, set_scalar, old_scalar, new_scalar, text=f'Convert unit to {unit_str}')
        )

        # Update the scalar
        self._scalar = new_scalar

    # When the user calls convert_unit, we want to notify observers of the change to propagate the change.
    @notify_observers
    def convert_unit(self, unit_str: str) -> None:
        """
        Convert the value from one unit system to another.

        :param unit_str: New unit in string form
        """
        self._convert_unit(unit_str)

    # Just to get return type right
    def __copy__(self) -> DescriptorNumber:
        return super().__copy__()

    def __repr__(self) -> str:
        """Return printable representation."""
        string = '<'
        string += self.__class__.__name__ + ' '
        string += f"'{self._name}': "
        if np.abs(self._scalar.value) > 1e4 or (np.abs(self._scalar.value) < 1e-4 and self._scalar.value != 0):
            # Use scientific notation for large or small values
            string += f'{self._scalar.value:.3e}'
            if self.variance:
                string += f' \u00b1 {self.error:.3e}'
        else:
            string += f'{self._scalar.value:.4f}'
            if self.variance:
                string += f' \u00b1 {self.error:.4f}'
        obj_unit = self._scalar.unit
        if obj_unit == 'dimensionless':
            obj_unit = ''
        else:
            obj_unit = f' {obj_unit}'
        string += obj_unit
        string += '>'
        return string
        # return f"<{class_name} '{obj_name}': {obj_value:0.04f}{obj_unit}>"

    def as_dict(self, skip: Optional[List[str]] = None) -> Dict[str, Any]:
        raw_dict = super().as_dict(skip=skip)
        raw_dict['value'] = self._scalar.value
        raw_dict['unit'] = str(self._scalar.unit)
        raw_dict['variance'] = self._scalar.variance
        if hasattr(self, '_DescriptorNumber__serializer_id'):
            raw_dict['__serializer_id'] = self.__serializer_id
        return raw_dict

    def __add__(self, other: Union[DescriptorNumber, numbers.Number]) -> DescriptorNumber:
        if isinstance(other, numbers.Number):
            if self.unit != 'dimensionless':
                raise UnitError('Numbers can only be added to dimensionless values')
            new_value = self.full_value + other
        elif type(other) is DescriptorNumber:
            original_unit = other.unit
            try:
                other._convert_unit(self.unit)
            except UnitError:
                raise UnitError(f'Values with units {self.unit} and {other.unit} cannot be added') from None
            new_value = self.full_value + other.full_value
            other._convert_unit(original_unit)
        else:
            return NotImplemented
        descriptor_number = DescriptorNumber.from_scipp(name=self.name, full_value=new_value)
        descriptor_number.name = descriptor_number.unique_name
        return descriptor_number

    def __radd__(self, other: numbers.Number) -> DescriptorNumber:
        if isinstance(other, numbers.Number):
            if self.unit != 'dimensionless':
                raise UnitError('Numbers can only be added to dimensionless values')
            new_value = other + self.full_value
        else:
            return NotImplemented
        descriptor_number = DescriptorNumber.from_scipp(name=self.name, full_value=new_value)
        descriptor_number.name = descriptor_number.unique_name
        return descriptor_number

    def __sub__(self, other: Union[DescriptorNumber, numbers.Number]) -> DescriptorNumber:
        if isinstance(other, numbers.Number):
            if self.unit != 'dimensionless':
                raise UnitError('Numbers can only be subtracted from dimensionless values')
            new_value = self.full_value - other
        elif type(other) is DescriptorNumber:
            original_unit = other.unit
            try:
                other._convert_unit(self.unit)
            except UnitError:
                raise UnitError(f'Values with units {self.unit} and {other.unit} cannot be subtracted') from None
            new_value = self.full_value - other.full_value
            other._convert_unit(original_unit)
        else:
            return NotImplemented
        descriptor_number = DescriptorNumber.from_scipp(name=self.name, full_value=new_value)
        descriptor_number.name = descriptor_number.unique_name
        return descriptor_number

    def __rsub__(self, other: numbers.Number) -> DescriptorNumber:
        if isinstance(other, numbers.Number):
            if self.unit != 'dimensionless':
                raise UnitError('Numbers can only be subtracted from dimensionless values')
            new_value = other - self.full_value
        else:
            return NotImplemented
        descriptor = DescriptorNumber.from_scipp(name=self.name, full_value=new_value)
        descriptor.name = descriptor.unique_name
        return descriptor

    def __mul__(self, other: Union[DescriptorNumber, numbers.Number]) -> DescriptorNumber:
        if isinstance(other, numbers.Number):
            new_value = self.full_value * other
        elif type(other) is DescriptorNumber:
            new_value = self.full_value * other.full_value
        else:
            return NotImplemented
        descriptor_number = DescriptorNumber.from_scipp(name=self.name, full_value=new_value)
        descriptor_number._convert_unit(descriptor_number._base_unit())
        descriptor_number.name = descriptor_number.unique_name
        return descriptor_number

    def __rmul__(self, other: numbers.Number) -> DescriptorNumber:
        if isinstance(other, numbers.Number):
            new_value = other * self.full_value
        else:
            return NotImplemented
        descriptor_number = DescriptorNumber.from_scipp(name=self.name, full_value=new_value)
        descriptor_number.name = descriptor_number.unique_name
        return descriptor_number

    def __truediv__(self, other: Union[DescriptorNumber, numbers.Number]) -> DescriptorNumber:
        if isinstance(other, numbers.Number):
            original_other = other
            if other == 0:
                raise ZeroDivisionError('Cannot divide by zero')
            new_value = self.full_value / other
        elif type(other) is DescriptorNumber:
            original_other = other.value
            if original_other == 0:
                raise ZeroDivisionError('Cannot divide by zero')
            new_value = self.full_value / other.full_value
            other.value = original_other
        else:
            return NotImplemented
        descriptor_number = DescriptorNumber.from_scipp(name=self.name, full_value=new_value)
        descriptor_number._convert_unit(descriptor_number._base_unit())
        descriptor_number.name = descriptor_number.unique_name
        return descriptor_number

    def __rtruediv__(self, other: numbers.Number) -> DescriptorNumber:
        if isinstance(other, numbers.Number):
            if self.value == 0:
                raise ZeroDivisionError('Cannot divide by zero')
            new_value = other / self.full_value
        else:
            return NotImplemented
        descriptor_number = DescriptorNumber.from_scipp(name=self.name, full_value=new_value)
        descriptor_number.name = descriptor_number.unique_name
        return descriptor_number

    def __pow__(self, other: Union[DescriptorNumber, numbers.Number]) -> DescriptorNumber:
        if isinstance(other, numbers.Number):
            exponent = other
        elif type(other) is DescriptorNumber:
            if other.unit != 'dimensionless':
                raise UnitError('Exponents must be dimensionless')
            if other.variance is not None:
                raise ValueError('Exponents must not have variance')
            exponent = other.value
        else:
            return NotImplemented
        try:
            new_value = self.full_value**exponent
        except Exception as message:
            raise message from None
        if np.isnan(new_value.value):
            raise ValueError('The result of the exponentiation is not a number')
        descriptor_number = DescriptorNumber.from_scipp(name=self.name, full_value=new_value)
        descriptor_number.name = descriptor_number.unique_name
        return descriptor_number

    def __rpow__(self, other: numbers.Number) -> numbers.Number:
        if isinstance(other, numbers.Number):
            if self.unit != 'dimensionless':
                raise UnitError('Exponents must be dimensionless')
            if self.variance is not None:
                raise ValueError('Exponents must not have variance')
            new_value = other**self.value
        else:
            return NotImplemented
        return new_value

    def __neg__(self) -> DescriptorNumber:
        new_value = -self.full_value
        descriptor_number = DescriptorNumber.from_scipp(name=self.name, full_value=new_value)
        descriptor_number.name = descriptor_number.unique_name
        return descriptor_number

    def __abs__(self) -> DescriptorNumber:
        new_value = abs(self.full_value)
        descriptor_number = DescriptorNumber.from_scipp(name=self.name, full_value=new_value)
        descriptor_number.name = descriptor_number.unique_name
        return descriptor_number

    def _base_unit(self) -> str:
        """
        Extract the base unit from the unit string by removing numeric components and scientific notation.
        """
        string = str(self._scalar.unit)
        for i, letter in enumerate(string):
            if letter == 'e':
                if string[i : i + 2] not in ['e+', 'e-']:
                    return string[i:]
            elif letter not in ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '.', '+', '-']:
                return string[i:]
        return ''

_observers instance-attribute

_observers = []

__serializer_id instance-attribute

__serializer_id = pop('__serializer_id')

_scalar instance-attribute

_scalar = scalar(float(value), unit=unit, variance=variance)

full_value property writable

full_value

Get the value of self as a scipp scalar. This should be usable for most cases.

:return: Value of self with unit.

value property writable

value

Get the value. This should be usable for most cases. The full value can be obtained from obj.full_value.

:return: Value of self with unit.

unit property writable

unit

Get the unit.

:return: Unit as a string.

variance property writable

variance

Get the variance.

:return: variance.

error property writable

error

The standard deviation for the parameter.

:return: Error associated with parameter
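
The value, variance and error properties operate on the same underlying scipp scalar: error is the square root of variance, and setting the error stores the corresponding variance. A short sketch (values are illustrative):

from easyscience.variable import DescriptorNumber

d = DescriptorNumber(name='intensity', value=100.0, unit='counts', variance=25.0)

print(d.error)       # 5.0
d.error = 10.0       # stores variance = error**2
print(d.variance)    # 100.0

d.value = 120.0      # plain number; the unit is unchanged
print(d.full_value)  # the underlying scipp scalar (value, variance and unit)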

__init__

__init__(
    name,
    value,
    unit='',
    variance=None,
    unique_name=None,
    description=None,
    url=None,
    display_name=None,
    parent=None,
    **kwargs,
)

Constructor for the DescriptorNumber class

param name: Name of the descriptor
param value: Value of the descriptor
param unit: Unit of the descriptor
param variance: Variance of the descriptor
param description: Description of the descriptor
param url: URL of the descriptor
param display_name: Display name of the descriptor
param parent: Parent of the descriptor

.. note:: Undo/Redo functionality is implemented for the attributes variance, error, unit and value.

Source code in src/easyscience/variable/descriptor_number.py
def __init__(
    self,
    name: str,
    value: numbers.Number,
    unit: Optional[Union[str, sc.Unit]] = '',
    variance: Optional[numbers.Number] = None,
    unique_name: Optional[str] = None,
    description: Optional[str] = None,
    url: Optional[str] = None,
    display_name: Optional[str] = None,
    parent: Optional[Any] = None,
    **kwargs: Any,  # Additional keyword arguments (used for (de)serialization)
):
    """Constructor for the DescriptorNumber class

    param name: Name of the descriptor
    param value: Value of the descriptor
    param unit: Unit of the descriptor
    param variance: Variance of the descriptor
    param description: Description of the descriptor
    param url: URL of the descriptor
    param display_name: Display name of the descriptor
    param parent: Parent of the descriptor
    .. note:: Undo/Redo functionality is implemented for the attributes `variance`, `error`, `unit` and `value`.
    """
    self._observers: List[DescriptorNumber] = []

    # Extract serializer_id if provided during deserialization
    if '__serializer_id' in kwargs:
        self.__serializer_id = kwargs.pop('__serializer_id')

    if not isinstance(value, numbers.Number) or isinstance(value, bool):
        raise TypeError(f'{value=} must be a number')
    if variance is not None:
        if not isinstance(variance, numbers.Number) or isinstance(variance, bool):
            raise TypeError(f'{variance=} must be a number or None')
        if variance < 0:
            raise ValueError(f'{variance=} must be positive')
        variance = float(variance)
    if not isinstance(unit, sc.Unit) and not isinstance(unit, str):
        raise TypeError(f'{unit=} must be a scipp unit or a string representing a valid scipp unit')
    try:
        self._scalar = sc.scalar(float(value), unit=unit, variance=variance)
    except Exception as message:
        raise UnitError(message)
    super().__init__(
        name=name,
        unique_name=unique_name,
        description=description,
        url=url,
        display_name=display_name,
        parent=parent,
    )

    # Call convert_unit during initialization to ensure that the unit has no numbers in it, and to ensure unit consistency.
    if self.unit is not None:
        self._convert_unit(self._base_unit())

from_scipp classmethod

from_scipp(name, full_value, **kwargs)

Create a DescriptorNumber from a scipp scalar.

:param name: Name of the descriptor
:param full_value: Value of the descriptor as a scipp scalar
:param kwargs: Additional parameters for the descriptor
:return: DescriptorNumber

Source code in src/easyscience/variable/descriptor_number.py
@classmethod
def from_scipp(cls, name: str, full_value: Variable, **kwargs) -> DescriptorNumber:
    """
    Create a DescriptorNumber from a scipp scalar.

    :param name: Name of the descriptor
    :param full_value: Value of the descriptor as a scipp scalar
    :param kwargs: Additional parameters for the descriptor
    :return: DescriptorNumber
    """
    if not isinstance(full_value, Variable):
        raise TypeError(f'{full_value=} must be a scipp scalar')
    if len(full_value.dims) != 0:
        raise TypeError(f'{full_value=} must be a scipp scalar')
    return cls(name=name, value=full_value.value, unit=full_value.unit, variance=full_value.variance, **kwargs)
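
A short sketch of building a DescriptorNumber from an existing scipp scalar (values are illustrative):

import scipp as sc

from easyscience.variable import DescriptorNumber

scalar = sc.scalar(2.5, unit='m', variance=0.04)
height = DescriptorNumber.from_scipp(name='height', full_value=scalar)

print(height.value, height.unit, height.variance)  # 2.5 m 0.04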

_attach_observer

_attach_observer(observer)

Attach an observer to the descriptor.

Source code in src/easyscience/variable/descriptor_number.py
def _attach_observer(self, observer: DescriptorNumber) -> None:
    """Attach an observer to the descriptor."""
    self._observers.append(observer)
    if not hasattr(self, '_DescriptorNumber__serializer_id'):
        self.__serializer_id = str(uuid.uuid4())

_detach_observer

_detach_observer(observer)

Detach an observer from the descriptor.

Source code in src/easyscience/variable/descriptor_number.py
def _detach_observer(self, observer: DescriptorNumber) -> None:
    """Detach an observer from the descriptor."""
    self._observers.remove(observer)
    if not self._observers:
        del self.__serializer_id

_notify_observers

_notify_observers()

Notify all observers of a change.

Source code in src/easyscience/variable/descriptor_number.py
def _notify_observers(self) -> None:
    """Notify all observers of a change."""
    for observer in self._observers:
        observer._update()

_validate_dependencies

_validate_dependencies(origin=None)

Ping all observers to check if any cyclic dependencies have been introduced.

:param origin: Unique_name of the origin of this validation check. Used to avoid cyclic dependencies.

Source code in src/easyscience/variable/descriptor_number.py
def _validate_dependencies(self, origin=None) -> None:
    """Ping all observers to check if any cyclic dependencies have been introduced.

    :param origin: Unique_name of the origin of this validation check. Used to avoid cyclic dependencies.
    """
    if origin == self.unique_name:
        raise RuntimeError(
            '\n Cyclic dependency detected!\n'
            + f'An update of {self.unique_name} leads to it updating itself.\n'
            + 'Please check your dependencies.'
        )
    if origin is None:
        origin = self.unique_name
    for observer in self._observers:
        observer._validate_dependencies(origin=origin)

_convert_unit

_convert_unit(unit_str)

Convert the value from one unit system to another.

:param unit_str: New unit in string form

Source code in src/easyscience/variable/descriptor_number.py
def _convert_unit(self, unit_str: str) -> None:
    """
    Convert the value from one unit system to another.

    :param unit_str: New unit in string form
    """
    if not isinstance(unit_str, str):
        raise TypeError(f'{unit_str=} must be a string representing a valid scipp unit')
    new_unit = sc.Unit(unit_str)

    # Save the current state for undo/redo
    old_scalar = self._scalar

    # Perform the unit conversion
    try:
        new_scalar = self._scalar.to(unit=new_unit)
    except Exception as e:
        raise UnitError(f'Failed to convert unit: {e}') from e

    # Define the setter function for the undo stack
    def set_scalar(obj, scalar):
        obj._scalar = scalar

    # Push to undo stack
    self._global_object.stack.push(
        PropertyStack(self, set_scalar, old_scalar, new_scalar, text=f'Convert unit to {unit_str}')
    )

    # Update the scalar
    self._scalar = new_scalar

convert_unit

convert_unit(unit_str)

Convert the value from one unit system to another.

:param unit_str: New unit in string form

Source code in src/easyscience/variable/descriptor_number.py
@notify_observers
def convert_unit(self, unit_str: str) -> None:
    """
    Convert the value from one unit system to another.

    :param unit_str: New unit in string form
    """
    self._convert_unit(unit_str)
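
A short sketch of an in-place unit conversion (values are illustrative; the variance scales with the square of the conversion factor):

from easyscience.variable import DescriptorNumber

d = DescriptorNumber(name='distance', value=1500.0, unit='m', variance=100.0)
d.convert_unit('km')

print(d.value)     # 1.5
print(d.unit)      # 'km'
print(d.variance)  # approximately 1e-4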

__copy__

__copy__()
Source code in src/easyscience/variable/descriptor_number.py
def __copy__(self) -> DescriptorNumber:
    return super().__copy__()

__repr__

__repr__()

Return printable representation.

Source code in src/easyscience/variable/descriptor_number.py
def __repr__(self) -> str:
    """Return printable representation."""
    string = '<'
    string += self.__class__.__name__ + ' '
    string += f"'{self._name}': "
    if np.abs(self._scalar.value) > 1e4 or (np.abs(self._scalar.value) < 1e-4 and self._scalar.value != 0):
        # Use scientific notation for large or small values
        string += f'{self._scalar.value:.3e}'
        if self.variance:
            string += f' \u00b1 {self.error:.3e}'
    else:
        string += f'{self._scalar.value:.4f}'
        if self.variance:
            string += f' \u00b1 {self.error:.4f}'
    obj_unit = self._scalar.unit
    if obj_unit == 'dimensionless':
        obj_unit = ''
    else:
        obj_unit = f' {obj_unit}'
    string += obj_unit
    string += '>'
    return string
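
The representation switches to scientific notation for magnitudes above 1e4 or below 1e-4, and the uncertainty is only shown when a variance is set. For example (output shown as comments):

from easyscience.variable import DescriptorNumber

print(DescriptorNumber('a', 2.0, unit='m', variance=0.01))
# <DescriptorNumber 'a': 2.0000 ± 0.1000 m>

print(DescriptorNumber('b', 52000.0, unit='s'))
# <DescriptorNumber 'b': 5.200e+04 s>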

as_dict

as_dict(skip=None)
Source code in src/easyscience/variable/descriptor_number.py
def as_dict(self, skip: Optional[List[str]] = None) -> Dict[str, Any]:
    raw_dict = super().as_dict(skip=skip)
    raw_dict['value'] = self._scalar.value
    raw_dict['unit'] = str(self._scalar.unit)
    raw_dict['variance'] = self._scalar.variance
    if hasattr(self, '_DescriptorNumber__serializer_id'):
        raw_dict['__serializer_id'] = self.__serializer_id
    return raw_dict
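
The exported dictionary contains the value, unit and variance alongside the metadata inherited from DescriptorBase. A sketch of a serialization round trip, mirroring the as_dict/from_dict pattern used by __copy__:

from easyscience.variable import DescriptorNumber

d = DescriptorNumber(name='scale', value=1.2, unit='m', variance=0.04)
serialized = d.as_dict(skip=['unique_name'])

print(serialized['value'], serialized['unit'], serialized['variance'])  # 1.2 m 0.04

restored = DescriptorNumber.from_dict(serialized)
print(restored.value, restored.unit)  # 1.2 m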

__add__

__add__(other)
Source code in src/easyscience/variable/descriptor_number.py
def __add__(self, other: Union[DescriptorNumber, numbers.Number]) -> DescriptorNumber:
    if isinstance(other, numbers.Number):
        if self.unit != 'dimensionless':
            raise UnitError('Numbers can only be added to dimensionless values')
        new_value = self.full_value + other
    elif type(other) is DescriptorNumber:
        original_unit = other.unit
        try:
            other._convert_unit(self.unit)
        except UnitError:
            raise UnitError(f'Values with units {self.unit} and {other.unit} cannot be added') from None
        new_value = self.full_value + other.full_value
        other._convert_unit(original_unit)
    else:
        return NotImplemented
    descriptor_number = DescriptorNumber.from_scipp(name=self.name, full_value=new_value)
    descriptor_number.name = descriptor_number.unique_name
    return descriptor_number
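
Arithmetic between DescriptorNumber objects converts the right-hand operand to the left-hand unit for the operation (and restores it afterwards); the result is a new DescriptorNumber whose name is set to its auto-generated unique name. A short sketch:

from easyscience.variable import DescriptorNumber

a = DescriptorNumber(name='a', value=1.0, unit='m')
b = DescriptorNumber(name='b', value=50.0, unit='cm')

total = a + b
print(total.value, total.unit)  # 1.5 m
print(b.unit)                   # 'cm', the operand is left unchanged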

__radd__

__radd__(other)
Source code in src/easyscience/variable/descriptor_number.py
def __radd__(self, other: numbers.Number) -> DescriptorNumber:
    if isinstance(other, numbers.Number):
        if self.unit != 'dimensionless':
            raise UnitError('Numbers can only be added to dimensionless values')
        new_value = other + self.full_value
    else:
        return NotImplemented
    descriptor_number = DescriptorNumber.from_scipp(name=self.name, full_value=new_value)
    descriptor_number.name = descriptor_number.unique_name
    return descriptor_number

__sub__

__sub__(other)
Source code in src/easyscience/variable/descriptor_number.py
def __sub__(self, other: Union[DescriptorNumber, numbers.Number]) -> DescriptorNumber:
    if isinstance(other, numbers.Number):
        if self.unit != 'dimensionless':
            raise UnitError('Numbers can only be subtracted from dimensionless values')
        new_value = self.full_value - other
    elif type(other) is DescriptorNumber:
        original_unit = other.unit
        try:
            other._convert_unit(self.unit)
        except UnitError:
            raise UnitError(f'Values with units {self.unit} and {other.unit} cannot be subtracted') from None
        new_value = self.full_value - other.full_value
        other._convert_unit(original_unit)
    else:
        return NotImplemented
    descriptor_number = DescriptorNumber.from_scipp(name=self.name, full_value=new_value)
    descriptor_number.name = descriptor_number.unique_name
    return descriptor_number

__rsub__

__rsub__(other)
Source code in src/easyscience/variable/descriptor_number.py
def __rsub__(self, other: numbers.Number) -> DescriptorNumber:
    if isinstance(other, numbers.Number):
        if self.unit != 'dimensionless':
            raise UnitError('Numbers can only be subtracted from dimensionless values')
        new_value = other - self.full_value
    else:
        return NotImplemented
    descriptor = DescriptorNumber.from_scipp(name=self.name, full_value=new_value)
    descriptor.name = descriptor.unique_name
    return descriptor

__mul__

__mul__(other)
Source code in src/easyscience/variable/descriptor_number.py
def __mul__(self, other: Union[DescriptorNumber, numbers.Number]) -> DescriptorNumber:
    if isinstance(other, numbers.Number):
        new_value = self.full_value * other
    elif type(other) is DescriptorNumber:
        new_value = self.full_value * other.full_value
    else:
        return NotImplemented
    descriptor_number = DescriptorNumber.from_scipp(name=self.name, full_value=new_value)
    descriptor_number._convert_unit(descriptor_number._base_unit())
    descriptor_number.name = descriptor_number.unique_name
    return descriptor_number

__rmul__

__rmul__(other)
Source code in src/easyscience/variable/descriptor_number.py
def __rmul__(self, other: numbers.Number) -> DescriptorNumber:
    if isinstance(other, numbers.Number):
        new_value = other * self.full_value
    else:
        return NotImplemented
    descriptor_number = DescriptorNumber.from_scipp(name=self.name, full_value=new_value)
    descriptor_number.name = descriptor_number.unique_name
    return descriptor_number

__truediv__

__truediv__(other)
Source code in src/easyscience/variable/descriptor_number.py
def __truediv__(self, other: Union[DescriptorNumber, numbers.Number]) -> DescriptorNumber:
    if isinstance(other, numbers.Number):
        original_other = other
        if other == 0:
            raise ZeroDivisionError('Cannot divide by zero')
        new_value = self.full_value / other
    elif type(other) is DescriptorNumber:
        original_other = other.value
        if original_other == 0:
            raise ZeroDivisionError('Cannot divide by zero')
        new_value = self.full_value / other.full_value
        other.value = original_other
    else:
        return NotImplemented
    descriptor_number = DescriptorNumber.from_scipp(name=self.name, full_value=new_value)
    descriptor_number._convert_unit(descriptor_number._base_unit())
    descriptor_number.name = descriptor_number.unique_name
    return descriptor_number

__rtruediv__

__rtruediv__(other)
Source code in src/easyscience/variable/descriptor_number.py
def __rtruediv__(self, other: numbers.Number) -> DescriptorNumber:
    if isinstance(other, numbers.Number):
        if self.value == 0:
            raise ZeroDivisionError('Cannot divide by zero')
        new_value = other / self.full_value
    else:
        return NotImplemented
    descriptor_number = DescriptorNumber.from_scipp(name=self.name, full_value=new_value)
    descriptor_number.name = descriptor_number.unique_name
    return descriptor_number

__pow__

__pow__(other)
Source code in src/easyscience/variable/descriptor_number.py
def __pow__(self, other: Union[DescriptorNumber, numbers.Number]) -> DescriptorNumber:
    if isinstance(other, numbers.Number):
        exponent = other
    elif type(other) is DescriptorNumber:
        if other.unit != 'dimensionless':
            raise UnitError('Exponents must be dimensionless')
        if other.variance is not None:
            raise ValueError('Exponents must not have variance')
        exponent = other.value
    else:
        return NotImplemented
    try:
        new_value = self.full_value**exponent
    except Exception as message:
        raise message from None
    if np.isnan(new_value.value):
        raise ValueError('The result of the exponentiation is not a number')
    descriptor_number = DescriptorNumber.from_scipp(name=self.name, full_value=new_value)
    descriptor_number.name = descriptor_number.unique_name
    return descriptor_number

__rpow__

__rpow__(other)
Source code in src/easyscience/variable/descriptor_number.py
def __rpow__(self, other: numbers.Number) -> numbers.Number:
    if isinstance(other, numbers.Number):
        if self.unit != 'dimensionless':
            raise UnitError('Exponents must be dimensionless')
        if self.variance is not None:
            raise ValueError('Exponents must not have variance')
        new_value = other**self.value
    else:
        return NotImplemented
    return new_value

__neg__

__neg__()
Source code in src/easyscience/variable/descriptor_number.py
def __neg__(self) -> DescriptorNumber:
    new_value = -self.full_value
    descriptor_number = DescriptorNumber.from_scipp(name=self.name, full_value=new_value)
    descriptor_number.name = descriptor_number.unique_name
    return descriptor_number

__abs__

__abs__()
Source code in src/easyscience/variable/descriptor_number.py
def __abs__(self) -> DescriptorNumber:
    new_value = abs(self.full_value)
    descriptor_number = DescriptorNumber.from_scipp(name=self.name, full_value=new_value)
    descriptor_number.name = descriptor_number.unique_name
    return descriptor_number

_base_unit

_base_unit()

Extract the base unit from the unit string by removing numeric components and scientific notation.

Source code in src/easyscience/variable/descriptor_number.py
def _base_unit(self) -> str:
    """
    Extract the base unit from the unit string by removing numeric components and scientific notation.
    """
    string = str(self._scalar.unit)
    for i, letter in enumerate(string):
        if letter == 'e':
            if string[i : i + 2] not in ['e+', 'e-']:
                return string[i:]
        elif letter not in ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '.', '+', '-']:
            return string[i:]
    return ''

easyscience.variable.DescriptorArray

Bases: DescriptorBase

A Descriptor for Array values with units. The internal representation is a scipp array.
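
A minimal usage sketch (the unit, values and dimension labels are illustrative):

from easyscience.variable import DescriptorArray

arr = DescriptorArray(
    name='counts',
    value=[[1.0, 2.0], [3.0, 4.0]],
    unit='counts',
    variance=[[1.0, 2.0], [3.0, 4.0]],
    dimensions=['x', 'y'],
)
print(arr.value)       # 2x2 numpy array of the values
print(arr.dimensions)  # the dimension labels used by the underlying scipp array
print(arr.full_value)  # the underlying scipp array with unit and variances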

Source code in src/easyscience/variable/descriptor_array.py
class DescriptorArray(DescriptorBase):
    """
    A `Descriptor` for Array values with units. The internal representation is a scipp array.
    """

    def __init__(
        self,
        name: str,
        value: Union[list, np.ndarray],
        unit: Optional[Union[str, sc.Unit]] = '',
        variance: Optional[Union[list, np.ndarray]] = None,
        unique_name: Optional[str] = None,
        description: Optional[str] = None,
        url: Optional[str] = None,
        display_name: Optional[str] = None,
        parent: Optional[Any] = None,
        dimensions: Optional[list] = None,
    ):
        """Constructor for the DescriptorArray class

        param name: Name of the descriptor
        param value: List containing the values of the descriptor
        param unit: Unit of the descriptor
        param variance: Variances of the descriptor
        param description: Description of the descriptor
        param url: URL of the descriptor
        param display_name: Display name of the descriptor
        param parent: Parent of the descriptor
        param dimensions: List of dimensions to pass to scipp. Will be autogenerated if not supplied.
        .. note:: Undo/Redo functionality is implemented for the attributes `variance`, `error`, `unit` and `value`.
        """

        if not isinstance(value, (list, np.ndarray)):
            raise TypeError(f'{value=} must be a list or numpy array.')
        if isinstance(value, list):
            value = np.array(value)  # Convert to numpy array for consistent handling.
        value = np.astype(value, 'float')

        if variance is not None:
            if not isinstance(variance, (list, np.ndarray)):
                raise TypeError(f'{variance=} must be a list or numpy array if provided.')
            if isinstance(variance, list):
                variance = np.array(variance)  # Convert to numpy array for consistent handling.
            if variance.shape != value.shape:
                raise ValueError(f'{variance=} must have the same shape as {value=}.')
            if not np.all(variance >= 0):
                raise ValueError(f'{variance=} must only contain non-negative values.')
            variance = np.astype(variance, 'float')

        if not isinstance(unit, sc.Unit) and not isinstance(unit, str):
            raise TypeError(f'{unit=} must be a scipp unit or a string representing a valid scipp unit')

        if dimensions is None:
            # Autogenerate dimensions if not supplied
            dimensions = ['dim' + str(i) for i in range(len(value.shape))]
        if not len(dimensions) == len(value.shape):
            raise ValueError(f'Length of dimensions ({dimensions=}) does not match length of value {value=}.')
        self._dimensions = dimensions

        try:
            # Convert value and variance to floats
            # for optimization everything must be floats
            self._array = sc.array(dims=dimensions, values=value, unit=unit, variances=variance)
        except Exception as message:
            raise UnitError(message)
            # TODO: handle 1xn and nx1 arrays

        super().__init__(
            name=name,
            unique_name=unique_name,
            description=description,
            url=url,
            display_name=display_name,
            parent=parent,
        )

        # Call convert_unit during initialization to ensure that the unit has no numbers in it, and to ensure unit consistency.
        if self.unit is not None:
            self.convert_unit(self._base_unit())

    @classmethod
    def from_scipp(cls, name: str, full_value: Variable, **kwargs) -> DescriptorArray:
        """
        Create a DescriptorArray from a scipp array.

        :param name: Name of the descriptor
        :param full_value: Value of the descriptor as a scipp variable
        :param kwargs: Additional parameters for the descriptor
        :return: DescriptorArray
        """
        if not isinstance(full_value, Variable):
            raise TypeError(f'{full_value=} must be a scipp array')
        return cls(
            name=name,
            value=full_value.values,
            unit=full_value.unit,
            variance=full_value.variances,
            dimensions=full_value.dims,
            **kwargs,
        )

    @property
    def full_value(self) -> Variable:
        """
        Get the value of self as a scipp array. This should be usable for most cases.

        :return: Value of self with unit.
        """
        return self._array

    @full_value.setter
    def full_value(self, full_value: Variable) -> None:
        raise AttributeError(
            f'Full_value is read-only. Change the value and variance separately. Or create a new {self.__class__.__name__}.'
        )

    @property
    def value(self) -> numbers.Number:
        """
        Get the value without units. The Scipp array can be obtained from `obj.full_value`.

        :return: Value of self without unit.
        """
        return self._array.values

    @value.setter
    @property_stack
    def value(self, value: Union[list, np.ndarray]) -> None:
        """
        Set the value of self. Ensures the input is an array and matches the shape of the existing array.
        The full value can be obtained from `obj.full_value`.

        :param value: New value for the DescriptorArray, must be a list or numpy array.
        """
        if not isinstance(value, (list, np.ndarray)):
            raise TypeError(f'{value=} must be a list or numpy array.')
        if isinstance(value, list):
            value = np.array(value)  # Convert lists to numpy arrays for consistent handling.

        if value.shape != self._array.values.shape:
            raise ValueError(f'{value=} must have the same shape as the existing array values.')

        # Values must be floats for optimization
        self._array.values = value.astype('float')

    @property
    def dimensions(self) -> list:
        """
        Get the dimensions used for the underlying scipp array.

        :return: dimensions of self.
        """
        return self._dimensions

    @dimensions.setter
    def dimensions(self, dimensions: Union[list]) -> None:
        """
        Set the dimensions of self. Ensures that the input has a shape compatible with self.full_value.

        :param dimensions: list of dimensions.
        """
        if not isinstance(dimensions, (list, np.ndarray)):
            raise TypeError(f'{dimensions=} must be a list or numpy array.')

        if len(dimensions) != len(self._dimensions):
            raise ValueError(f'{dimensions=} must have the same shape as the existing dims')

        self._dimensions = dimensions
        # Also rename the dims of the scipp array
        rename_dict = {old_dim: new_dim for (old_dim, new_dim) in zip(self.full_value.dims, dimensions)}
        renamed_array = self._array.rename_dims(rename_dict)
        self._array = renamed_array

    @property
    def unit(self) -> str:
        """
        Get the unit.

        :return: Unit as a string.
        """
        return str(self._array.unit)

    @unit.setter
    def unit(self, unit_str: str) -> None:
        raise AttributeError(
            (
                f'Unit is read-only. Use convert_unit to change the unit between allowed types '
                f'or create a new {self.__class__.__name__} with the desired unit.'
            )
        )  # noqa: E501

    @property
    def variance(self) -> np.ndarray:
        """
        Get the variance as a Numpy ndarray.

        :return: variance.
        """
        return self._array.variances

    @variance.setter
    @property_stack
    def variance(self, variance: Union[list, np.ndarray]) -> None:
        """
        Set the variance of self. Ensures the input is an array and matches the shape of the existing values.

        :param variance: New variance for the DescriptorArray, must be a list or numpy array.
        """
        if variance is not None:
            if not isinstance(variance, (list, np.ndarray)):
                raise TypeError(f'{variance=} must be a list or numpy array.')
            if isinstance(variance, list):
                variance = np.array(variance)  # Convert lists to numpy arrays for consistent handling.

            if variance.shape != self._array.shape:
                raise ValueError(f'{variance=} must have the same shape as the array values.')

            if not np.all(variance >= 0):
                raise ValueError(f'{variance=} must only contain non-negative values.')

        # Values must be floats for optimization
        self._array.variances = variance.astype('float')

    @property
    def error(self) -> Optional[np.ndarray]:
        """
        The standard deviations, calculated as the square root of variances.

        :return: A numpy array of standard deviations, or None if variances are not set.
        """
        if self._array.variances is None:
            return None
        return np.sqrt(self._array.variances)

    @error.setter
    @property_stack
    def error(self, error: Union[list, np.ndarray]) -> None:
        """
        Set the standard deviation for the parameter, which updates the variances.

        :param error: A list or numpy array of standard deviations.
        """
        if error is not None:
            if not isinstance(error, (list, np.ndarray)):
                raise TypeError(f'{error=} must be a list or numpy array.')
            if isinstance(error, list):
                error = np.array(error)  # Convert lists to numpy arrays for consistent handling.

            if error.shape != self._array.values.shape:
                raise ValueError(f'{error=} must have the same shape as the array values.')

            if not np.all(error >= 0):
                raise ValueError(f'{error=} must only contain non-negative values.')

            # Update variances as the square of the errors
            self._array.variances = error**2
        else:
            self._array.variances = None

    def convert_unit(self, unit_str: str) -> None:
        """
        Convert the value from one unit system to another.

        :param unit_str: New unit in string form
        """
        if not isinstance(unit_str, str):
            raise TypeError(f'{unit_str=} must be a string representing a valid scipp unit')
        new_unit = sc.Unit(unit_str)

        # Save the current state for undo/redo
        old_array = self._array

        # Perform the unit conversion
        try:
            new_array = self._array.to(unit=new_unit)
        except Exception as e:
            raise UnitError(f'Failed to convert unit: {e}') from e

        # Define the setter function for the undo stack
        def set_array(obj, scalar):
            obj._array = scalar

        # Push to undo stack
        self._global_object.stack.push(
            PropertyStack(self, set_array, old_array, new_array, text=f'Convert unit to {unit_str}')
        )

        # Update the array
        self._array = new_array

    def __copy__(self) -> DescriptorArray:
        """
        Return a copy of the current DescriptorArray.
        """
        return super().__copy__()

    def __repr__(self) -> str:
        """
        Return a string representation of the DescriptorArray, showing its name, value, variance, and unit.
        Large arrays are summarized for brevity.
        """
        # Base string with name
        string = f"<{self.__class__.__name__} '{self._name}': "

        # Summarize array values
        values_summary = np.array2string(
            self._array.values,
            precision=4,
            threshold=10,  # Show full array if <=10 elements, else summarize
            edgeitems=3,  # Show first and last 3 elements for large arrays
        )
        string += f'values={values_summary}'

        # Add errors if they exist
        if self._array.variances is not None:
            errors_summary = np.array2string(
                self.error,
                precision=4,
                threshold=10,
                edgeitems=3,
            )
            string += f', errors={errors_summary}'

        # Add unit
        obj_unit = str(self._array.unit)
        if obj_unit and obj_unit != 'dimensionless':
            string += f', unit={obj_unit}'

        string += '>'
        string = string.replace('\n', ',')
        return string

    def as_dict(self, skip: Optional[List[str]] = None) -> Dict[str, Any]:
        """
        Dict representation of the current DescriptorArray. The dict contains the value, unit and variances,
        in addition to the properties of DescriptorBase.
        """
        raw_dict = super().as_dict(skip=skip)
        raw_dict['value'] = self._array.values
        raw_dict['unit'] = str(self._array.unit)
        raw_dict['variance'] = self._array.variances
        raw_dict['dimensions'] = self._array.dims
        return raw_dict

    def _apply_operation(
        self,
        other: Union[DescriptorArray, DescriptorNumber, list, numbers.Number],
        operation: Callable,
        units_must_match: bool = True,
    ) -> DescriptorArray:
        """
        Perform element-wise operations with another DescriptorNumber, DescriptorArray, list, or number.

        :param other: The object to operate on. Must be a DescriptorArray or DescriptorNumber with compatible units,
                    or a list with the same shape if the DescriptorArray is dimensionless.
        :param operation: The operation to perform
        :return: A new DescriptorArray representing the result of the operation.
        """
        if isinstance(other, numbers.Number):
            # Does not need to be dimensionless for multiplication and division
            if self.unit not in [None, 'dimensionless'] and units_must_match:
                raise UnitError('Numbers can only be used together with dimensionless values')
            new_full_value = operation(self.full_value, other)

        elif isinstance(other, list):
            if self.unit not in [None, 'dimensionless'] and units_must_match:
                raise UnitError('Operations with lists are only allowed for dimensionless values')

            # Ensure dimensions match
            if np.shape(other) != self._array.values.shape:
                raise ValueError(f'Shape of {other=} must match the shape of DescriptorArray values')

            other = sc.array(dims=self._array.dims, values=other)
            new_full_value = operation(self._array, other)  # Let scipp handle operation for uncertainty propagation

        elif isinstance(other, DescriptorNumber):
            try:
                other_converted = other.__copy__()
                other_converted.convert_unit(self.unit)
            except UnitError:
                if units_must_match:
                    raise UnitError(f'Values with units {self.unit} and {other.unit} are not compatible') from None
            # Operations with a DescriptorNumber that has a variance WILL introduce
            # correlations between the elements of the DescriptorArray.
            # See, https://content.iospress.com/articles/journal-of-neutron-research/jnr220049
            # However, DescriptorArray does not consider the covariance between
            # elements of the array. Hence, the broadcasting is "manually"
            # performed to work around `scipp` and a warning raised to the end user.
            if self._array.variances is not None or other.variance is not None:
                warn(
                    'Correlations introduced by this operation will not be considered.\
                      See https://content.iospress.com/articles/journal-of-neutron-research/jnr220049\
                      for further details',
                    UserWarning,
                )
            # Cheeky copy() of broadcasted scipp array to force scipp to perform the broadcast here
            broadcasted = sc.broadcast(other_converted.full_value, dims=self._array.dims, shape=self._array.shape).copy()
            new_full_value = operation(self.full_value, broadcasted)

        elif isinstance(other, DescriptorArray):
            try:
                other_converted = other.__copy__()
                other_converted.convert_unit(self.unit)
            except UnitError:
                if units_must_match:
                    raise UnitError(f'Values with units {self.unit} and {other.unit} are incompatible') from None

            # Ensure dimensions match
            if self.full_value.dims != other_converted.full_value.dims:
                raise ValueError(
                    f'Dimensions of the DescriptorArrays do not match: '
                    f'{self.full_value.dims} vs {other_converted.full_value.dims}'
                )

            new_full_value = operation(self.full_value, other_converted.full_value)

        else:
            return NotImplemented

        descriptor_array = DescriptorArray.from_scipp(name=self.name, full_value=new_full_value)
        descriptor_array.name = descriptor_array.unique_name
        return descriptor_array

    def _rapply_operation(
        self,
        other: Union[DescriptorArray, DescriptorNumber, list, numbers.Number],
        operation: Callable,
        units_must_match: bool = True,
    ) -> DescriptorArray:
        """
        Handle reverse operations for DescriptorArrays, DescriptorNumbers, lists, and scalars.
        Ensures unit compatibility when `other` is a DescriptorNumber.
        """

        def reversed_operation(a, b):
            return operation(b, a)

        if isinstance(other, DescriptorNumber):
            # Ensure unit compatibility for DescriptorNumber
            original_unit = self.unit
            try:
                self.convert_unit(other.unit)  # Convert `self` to `other`'s unit
            except UnitError:
                # Only allowed operations with different units are
                # multiplication and division. We try to convert
                # the units for mul/div, but if the conversion
                # fails it's no big deal.
                if units_must_match:
                    raise UnitError(f'Values with units {self.unit} and {other.unit} are incompatible') from None
            result = self._apply_operation(other, reversed_operation, units_must_match)
            # Revert `self` to its original unit
            self.convert_unit(original_unit)
            return result
        else:
            # Delegate the operation to self for other types (e.g., list, scalar)
            return self._apply_operation(other, reversed_operation, units_must_match)

    def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
        """
        DescriptorArray does not generally support Numpy array functions.
        For example, `np.argwhere(descriptorArray: DescriptorArray)` should fail.
        Modify this function if you want to add such functionality.
        """
        return NotImplemented

    def __array_function__(self, func, types, args, kwargs):
        """
        DescriptorArray does not generally support Numpy array functions.
        For example, `np.argwhere(descriptorArray: DescriptorArray)` should fail.
        Modify this function if you want to add such functionality.
        """
        return NotImplemented

    def __add__(self, other: Union[DescriptorArray, DescriptorNumber, list, numbers.Number]) -> DescriptorArray:
        """
        Perform element-wise addition with another DescriptorNumber, DescriptorArray, list, or number.

        :param other: The object to add. Must be a DescriptorArray or DescriptorNumber with compatible units,
                    or a list with the same shape if the DescriptorArray is dimensionless, or a number.
        :return: A new DescriptorArray representing the result of the addition.
        """
        return self._apply_operation(other, operator.add)

    def __radd__(self, other: Union[DescriptorNumber, list, numbers.Number]) -> DescriptorArray:
        """
        Handle reverse addition for DescriptorArrays, DescriptorNumbers, lists, and scalars.
        Ensures unit compatibility when `other` is a DescriptorNumber.
        """
        return self._rapply_operation(other, operator.add)

    def __sub__(self, other: Union[DescriptorArray, list, np.ndarray, numbers.Number]) -> DescriptorArray:
        """
        Perform element-wise subtraction with another DescriptorArray, list, or number.

        :param other: The object to subtract. Must be a DescriptorArray with compatible units,
                    or a list with the same shape if the DescriptorArray is dimensionless.
        :return: A new DescriptorArray representing the result of the subtraction.
        """
        if isinstance(other, (DescriptorArray, DescriptorNumber, list, numbers.Number)):
            # Leverage __neg__ and __add__ for subtraction
            if isinstance(other, list):
                # Use numpy to negate all elements of the list
                value = (-np.array(other)).tolist()
            else:
                value = -other
            return self.__add__(value)
        else:
            return NotImplemented

    def __rsub__(self, other: Union[DescriptorNumber, list, numbers.Number]) -> DescriptorArray:
        """
        Perform element-wise subtraction with another DescriptorNumber, list, or number.

        :param other: The object to subtract. Must be a DescriptorArray with compatible units,
                    or a list with the same shape if the DescriptorArray is dimensionless.
        :return: A new DescriptorArray representing the result of the subtraction.
        """
        if isinstance(other, (DescriptorNumber, list, numbers.Number)):
            if isinstance(other, list):
                # Use numpy to negate all elements of the list
                value = (-np.array(other)).tolist()
            else:
                value = -other
            return -(self.__radd__(value))
        else:
            return NotImplemented

    def __mul__(self, other: Union[DescriptorArray, DescriptorNumber, list, numbers.Number]) -> DescriptorArray:
        """
        Perform element-wise multiplication with another DescriptorNumber, DescriptorArray, list, or number.

        :param other: The object to multiply. Must be a DescriptorArray or DescriptorNumber with compatible units,
                    or a list with the same shape if the DescriptorArray is dimensionless.
        :return: A new DescriptorArray representing the result of the multiplication.
        """
        if not isinstance(other, (DescriptorArray, DescriptorNumber, list, numbers.Number)):
            return NotImplemented
        return self._apply_operation(other, operator.mul, units_must_match=False)

    def __rmul__(self, other: Union[DescriptorNumber, list, numbers.Number]) -> DescriptorArray:
        """
        Handle reverse multiplication for DescriptorNumbers, lists, and scalars.
        Ensures unit compatibility when `other` is a DescriptorNumber.
        """
        if not isinstance(other, (DescriptorNumber, list, numbers.Number)):
            return NotImplemented
        return self._rapply_operation(other, operator.mul, units_must_match=False)

    def __truediv__(self, other: Union[DescriptorArray, DescriptorNumber, list, numbers.Number]) -> DescriptorArray:
        """
        Perform element-wise division with another DescriptorNumber, DescriptorArray, list, or number.

        :param other: The object to use as a denominator. Must be a DescriptorArray or DescriptorNumber with compatible units,
                    or a list with the same shape if the DescriptorArray is dimensionless.
        :return: A new DescriptorArray representing the result of the division.
        """
        if not isinstance(other, (DescriptorArray, DescriptorNumber, list, numbers.Number)):
            return NotImplemented

        if isinstance(other, numbers.Number):
            original_other = other
        elif isinstance(other, list):
            original_other = np.array(other)
        elif isinstance(other, (DescriptorArray, DescriptorNumber)):
            original_other = other.value

        if np.any(original_other == 0):
            raise ZeroDivisionError('Cannot divide by zero')
        return self._apply_operation(other, operator.truediv, units_must_match=False)

    def __rtruediv__(self, other: Union[DescriptorNumber, list, numbers.Number]) -> DescriptorArray:
        """
        Handle reverse division for DescriptorNumbers, lists, and scalars.
        Ensures unit compatibility when `other` is a DescriptorNumber.
        """
        if not isinstance(other, (DescriptorNumber, list, numbers.Number)):
            return NotImplemented

        if np.any(self.full_value.values == 0):
            raise ZeroDivisionError('Cannot divide by zero')

        # First use __div__ to compute `self / other`
        # but first converting to the units of other
        inverse_result = self._rapply_operation(other, operator.truediv, units_must_match=False)
        return inverse_result

    def __pow__(self, other: Union[DescriptorNumber, numbers.Number]) -> DescriptorArray:
        """
        Perform element-wise exponentiation with another DescriptorNumber or number.

        :param other: The exponent. Must be a number or DescriptorNumber with
                    no unit or variance.
        :return: A new DescriptorArray representing the result of the exponentiation.
        """
        if not isinstance(other, (numbers.Number, DescriptorNumber)):
            return NotImplemented

        if isinstance(other, numbers.Number):
            exponent = other
        elif isinstance(other, DescriptorNumber):
            if other.unit != 'dimensionless':
                raise UnitError('Exponents must be dimensionless')
            if other.variance is not None:
                raise ValueError('Exponents must not have variance')
            exponent = other.value
        else:
            return NotImplemented
        try:
            new_value = self.full_value**exponent
        except Exception as message:
            raise message from None
        if np.any(np.isnan(new_value.values)):
            raise ValueError('The result of the exponentiation is not a number')
        descriptor_number = DescriptorArray.from_scipp(name=self.name, full_value=new_value)
        descriptor_number.name = descriptor_number.unique_name
        return descriptor_number

    def __rpow__(self, other: numbers.Number):
        """
        Defers reverse pow with a descriptor array, `a ** array`.
        Exponentiation with regards to an array does not make sense,
        and is not implemented.
        """
        raise ValueError('Raising a value to the power of an array does not make sense.')

    def __neg__(self) -> DescriptorArray:
        """
        Negate all values in the DescriptorArray.
        """
        new_value = -self.full_value
        descriptor_array = DescriptorArray.from_scipp(name=self.name, full_value=new_value)
        descriptor_array.name = descriptor_array.unique_name
        return descriptor_array

    def __abs__(self) -> DescriptorArray:
        """
        Replace all elements in the DescriptorArray with their
        absolute values. Note that this is different from the
        norm of the DescriptorArray.
        """
        new_value = abs(self.full_value)
        descriptor_array = DescriptorArray.from_scipp(name=self.name, full_value=new_value)
        descriptor_array.name = descriptor_array.unique_name
        return descriptor_array

    def __getitem__(self, a) -> DescriptorArray:
        """
        Slice using scipp syntax.
        Defer slicing to scipp.
        """
        descriptor = DescriptorArray.from_scipp(name=self.name, full_value=self.full_value.__getitem__(a))
        descriptor.name = descriptor.unique_name
        return descriptor

    def __delitem__(self, a):
        """
        Defer slicing to scipp.
        This should fail, since scipp does not support __delitem__.
        """
        return self.full_value.__delitem__(a)

    def __setitem__(self, a, b: Union[numbers.Number, list, DescriptorNumber, DescriptorArray]):
        """
        __setitem__ via slice is not allowed, since we currently do not give back a
        view to the DescriptorArray upon calling __getitem__.
        """
        raise AttributeError(
            f'{self.__class__.__name__} cannot be edited via slicing. Edit the underlying scipp\
                    array via the `full_value` property, or create a\
                    new {self.__class__.__name__}.'
        )

    def trace(
        self, dimension1: Optional[str] = None, dimension2: Optional[str] = None
    ) -> Union[DescriptorArray, DescriptorNumber]:
        """
        Computes the trace over the descriptor array. The submatrix defined by `dimension1` and `dimension2` must be square.
        For a rank `k` tensor, the trace will run over the first two dimensions, resulting in a rank `k-2` tensor.

        :param dimension1, dimension2: First and second dimension to perform trace over. Must be in `self.dimensions`.
            If not defined, the trace will be taken over the first two dimensions.
        """
        if (dimension1 is not None and dimension2 is None) or (dimension1 is None and dimension2 is not None):
            raise ValueError('Either both or none of `dimension1` and `dimension2` must be set.')

        if dimension1 is not None and dimension2 is not None:
            if dimension1 == dimension2:
                raise ValueError(f'`{dimension1=}` and `{dimension2=}` must be different.')

            axes = []
            for dim in (dimension1, dimension2):
                if dim not in self.dimensions:
                    raise ValueError(f'Dimension {dim=} does not exist in `self.dimensions`.')
                index = self.dimensions.index(dim)
                axes.append(index)
            remaining_dimensions = [dim for dim in self.dimensions if dim not in (dimension1, dimension2)]
        else:
            # Take the first two dimensions
            axes = (0, 1)
            # Pick out the remaining dims
            remaining_dimensions = self.dimensions[2:]

        trace_value = np.trace(self.value, axis1=axes[0], axis2=axes[1])
        trace_variance = np.trace(self.variance, axis1=axes[0], axis2=axes[1]) if self.variance is not None else None
        # The trace reduces a rank k tensor to a k-2.
        if remaining_dimensions == []:
            # No remaining dimensions; the trace is a scalar
            trace = sc.scalar(value=trace_value, unit=self.unit, variance=trace_variance)
            constructor = DescriptorNumber.from_scipp
        else:
            # Else, the result is some array
            trace = sc.array(dims=remaining_dimensions, values=trace_value, unit=self.unit, variances=trace_variance)
            constructor = DescriptorArray.from_scipp

        descriptor = constructor(name=self.name, full_value=trace)
        descriptor.name = descriptor.unique_name
        return descriptor

    def sum(self, dim: Optional[Union[str, list]] = None) -> Union[DescriptorArray, DescriptorNumber]:
        """
        Uses scipp to sum over the requested dims.
        :param dim: The dim(s) in the scipp array to sum over. If `None`, will sum over all dims.
        """
        new_full_value = self.full_value.sum(dim=dim)

        # If fully reduced the result will be a DescriptorNumber,
        # otherwise a DescriptorArray
        if dim is None:
            constructor = DescriptorNumber.from_scipp
        else:
            constructor = DescriptorArray.from_scipp

        descriptor = constructor(name=self.name, full_value=new_full_value)
        descriptor.name = descriptor.unique_name
        return descriptor

    # This is to be implemented at a later time
    # def __matmul__(self, other: [DescriptorArray, list]) -> DescriptorArray:
    #     """
    #     Perform matrix multiplication with another DescriptorArray or list.

    #     :param other: The object to multiply with. Must be a DescriptorArray
    #                 or a list, of compatible shape.
    #     :return: A new DescriptorArray representing the result of the matrix multiplication.
    #     """
    #     if not isinstance(other, (DescriptorArray, list)):
    #         return NotImplemented

    #     if isinstance(other, DescriptorArray):
    #         shape = other.full_value.shape
    #     elif isinstance(other, list):
    #         shape = np.shape(other)

    #     # Dimensions must match for matrix multiplication
    #     if shape[0] != self._array.values.shape[-1]:
    #         raise ValueError(f"Last dimension of {other=} must match the first dimension of DescriptorArray values")
    #
    #     other = sc.array(dims=self._array.dims, values=other)
    #     new_full_value = operation(self._array, other)  # Let scipp handle operation for uncertainty propagation

    def _base_unit(self) -> str:
        """
        Returns the base unit of the current array.
        For example, if the unit is `100m`, returns `m`.
        """
        string = str(self._array.unit)
        for i, letter in enumerate(string):
            if letter == 'e':
                if string[i : i + 2] not in ['e+', 'e-']:
                    return string[i:]
            elif letter not in ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '.', '+', '-']:
                return string[i:]
        return ''
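
The following is an illustrative usage sketch, not part of the library source. It assumes the class can be imported from easyscience.variable.descriptor_array (matching the source path shown above) and shows construction from a nested list with a unit and per-element variances, plus read access to the main properties.

from easyscience.variable.descriptor_array import DescriptorArray

# 2x2 array of lengths in metres, with per-element variances
d = DescriptorArray(
    name='positions',
    value=[[1.0, 2.0], [3.0, 4.0]],
    unit='m',
    variance=[[0.01, 0.01], [0.04, 0.04]],
)

print(d.value)       # numpy array of the values, without the unit
print(d.unit)        # 'm'
print(d.error)       # element-wise standard deviations, i.e. the square roots of the variances
print(d.dimensions)  # ['dim0', 'dim1'], autogenerated because none were supplied
print(d.full_value)  # the underlying scipp array carrying values, variances and unit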

_dimensions instance-attribute

_dimensions = dimensions

_array instance-attribute

_array = array(
    dims=dimensions,
    values=value,
    unit=unit,
    variances=variance,
)

full_value property writable

full_value

Get the value of self as a scipp array. This should be usable for most cases.

:return: Value of self with unit.

value property writable

value

Get the value without units. The Scipp array can be obtained from obj.full_value.

:return: Value of self without unit.

dimensions property writable

dimensions

Get the dimensions used for the underlying scipp array.

:return: dimensions of self.

unit property writable

unit

Get the unit.

:return: Unit as a string.

variance property writable

variance

Get the variance as a Numpy ndarray.

:return: variance.

error property writable

error

The standard deviations, calculated as the square root of variances.

:return: A numpy array of standard deviations, or None if variances are not set.

__init__

__init__(
    name,
    value,
    unit='',
    variance=None,
    unique_name=None,
    description=None,
    url=None,
    display_name=None,
    parent=None,
    dimensions=None,
)

Constructor for the DescriptorArray class

:param name: Name of the descriptor
:param value: List containing the values of the descriptor
:param unit: Unit of the descriptor
:param variance: Variances of the descriptor
:param description: Description of the descriptor
:param url: URL of the descriptor
:param display_name: Display name of the descriptor
:param parent: Parent of the descriptor
:param dimensions: List of dimensions to pass to scipp. Will be autogenerated if not supplied.

.. note:: Undo/Redo functionality is implemented for the attributes variance, error, unit and value.

Source code in src/easyscience/variable/descriptor_array.py
def __init__(
    self,
    name: str,
    value: Union[list, np.ndarray],
    unit: Optional[Union[str, sc.Unit]] = '',
    variance: Optional[Union[list, np.ndarray]] = None,
    unique_name: Optional[str] = None,
    description: Optional[str] = None,
    url: Optional[str] = None,
    display_name: Optional[str] = None,
    parent: Optional[Any] = None,
    dimensions: Optional[list] = None,
):
    """Constructor for the DescriptorArray class

    :param name: Name of the descriptor
    :param value: List containing the values of the descriptor
    :param unit: Unit of the descriptor
    :param variance: Variances of the descriptor
    :param description: Description of the descriptor
    :param url: URL of the descriptor
    :param display_name: Display name of the descriptor
    :param parent: Parent of the descriptor
    :param dimensions: List of dimensions to pass to scipp. Will be autogenerated if not supplied.
    .. note:: Undo/Redo functionality is implemented for the attributes `variance`, `error`, `unit` and `value`.
    """

    if not isinstance(value, (list, np.ndarray)):
        raise TypeError(f'{value=} must be a list or numpy array.')
    if isinstance(value, list):
        value = np.array(value)  # Convert to numpy array for consistent handling.
    value = np.astype(value, 'float')

    if variance is not None:
        if not isinstance(variance, (list, np.ndarray)):
            raise TypeError(f'{variance=} must be a list or numpy array if provided.')
        if isinstance(variance, list):
            variance = np.array(variance)  # Convert to numpy array for consistent handling.
        if variance.shape != value.shape:
            raise ValueError(f'{variance=} must have the same shape as {value=}.')
        if not np.all(variance >= 0):
            raise ValueError(f'{variance=} must only contain non-negative values.')
        variance = np.astype(variance, 'float')

    if not isinstance(unit, sc.Unit) and not isinstance(unit, str):
        raise TypeError(f'{unit=} must be a scipp unit or a string representing a valid scipp unit')

    if dimensions is None:
        # Autogenerate dimensions if not supplied
        dimensions = ['dim' + str(i) for i in range(len(value.shape))]
    if not len(dimensions) == len(value.shape):
        raise ValueError(f'Length of dimensions ({dimensions=}) does not match length of value {value=}.')
    self._dimensions = dimensions

    try:
        # Convert value and variance to floats
        # for optimization everything must be floats
        self._array = sc.array(dims=dimensions, values=value, unit=unit, variances=variance)
    except Exception as message:
        raise UnitError(message)
        # TODO: handle 1xn and nx1 arrays

    super().__init__(
        name=name,
        unique_name=unique_name,
        description=description,
        url=url,
        display_name=display_name,
        parent=parent,
    )

    # Call convert_unit during initialization to ensure that the unit has no numbers in it, and to ensure unit consistency.
    if self.unit is not None:
        self.convert_unit(self._base_unit())
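
A short illustrative sketch of the dimension handling (not library source): when dimensions is omitted the names are autogenerated as 'dim0', 'dim1', ..., while passing explicit names makes scipp-style label-based slicing more readable later on.

import numpy as np
from easyscience.variable.descriptor_array import DescriptorArray

values = np.arange(6.0).reshape(2, 3)

auto = DescriptorArray(name='auto_dims', value=values, unit='s')
print(auto.dimensions)        # ['dim0', 'dim1'] (autogenerated)

named = DescriptorArray(name='named_dims', value=values, unit='s', dimensions=['row', 'col'])
print(named.full_value.dims)  # ('row', 'col')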

from_scipp classmethod

from_scipp(name, full_value, **kwargs)

Create a DescriptorArray from a scipp array.

:param name: Name of the descriptor
:param full_value: Value of the descriptor as a scipp variable
:param kwargs: Additional parameters for the descriptor
:return: DescriptorArray

Source code in src/easyscience/variable/descriptor_array.py
@classmethod
def from_scipp(cls, name: str, full_value: Variable, **kwargs) -> DescriptorArray:
    """
    Create a DescriptorArray from a scipp array.

    :param name: Name of the descriptor
    :param full_value: Value of the descriptor as a scipp variable
    :param kwargs: Additional parameters for the descriptor
    :return: DescriptorArray
    """
    if not isinstance(full_value, Variable):
        raise TypeError(f'{full_value=} must be a scipp array')
    return cls(
        name=name,
        value=full_value.values,
        unit=full_value.unit,
        variance=full_value.variances,
        dimensions=full_value.dims,
        **kwargs,
    )
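
A hedged example of constructing a DescriptorArray from an existing scipp variable (illustrative only; it assumes scipp is importable as sc, as in the source above):

import numpy as np
import scipp as sc
from easyscience.variable.descriptor_array import DescriptorArray

var = sc.array(
    dims=['x', 'y'],
    values=np.ones((2, 3)),
    variances=0.25 * np.ones((2, 3)),
    unit='counts',
)
d = DescriptorArray.from_scipp(name='detector', full_value=var)
print(d.unit)        # 'counts'
print(d.error)       # every element is 0.5, the square root of the variance
print(d.dimensions)  # ('x', 'y'), taken from the scipp variable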

convert_unit

convert_unit(unit_str)

Convert the value from one unit system to another.

:param unit_str: New unit in string form

Source code in src/easyscience/variable/descriptor_array.py
def convert_unit(self, unit_str: str) -> None:
    """
    Convert the value from one unit system to another.

    :param unit_str: New unit in string form
    """
    if not isinstance(unit_str, str):
        raise TypeError(f'{unit_str=} must be a string representing a valid scipp unit')
    new_unit = sc.Unit(unit_str)

    # Save the current state for undo/redo
    old_array = self._array

    # Perform the unit conversion
    try:
        new_array = self._array.to(unit=new_unit)
    except Exception as e:
        raise UnitError(f'Failed to convert unit: {e}') from e

    # Define the setter function for the undo stack
    def set_array(obj, scalar):
        obj._array = scalar

    # Push to undo stack
    self._global_object.stack.push(
        PropertyStack(self, set_array, old_array, new_array, text=f'Convert unit to {unit_str}')
    )

    # Update the array
    self._array = new_array
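
A small illustrative sketch of a unit conversion: values and variances are rescaled together, the unit string is updated, and the change is pushed onto the global undo stack.

from easyscience.variable.descriptor_array import DescriptorArray

d = DescriptorArray(name='length', value=[1.0, 2.5], unit='m', variance=[0.01, 0.01])
d.convert_unit('mm')
print(d.value)  # [1000. 2500.]
print(d.error)  # [100. 100.], standard deviations scale linearly with the unit
print(d.unit)   # 'mm'

# d.convert_unit('kg')  # would raise UnitError, since kg is not compatible with mm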

__copy__

__copy__()

Return a copy of the current DescriptorArray.

Source code in src/easyscience/variable/descriptor_array.py
def __copy__(self) -> DescriptorArray:
    """
    Return a copy of the current DescriptorArray.
    """
    return super().__copy__()

__repr__

__repr__()

Return a string representation of the DescriptorArray, showing its name, value, variance, and unit. Large arrays are summarized for brevity.

Source code in src/easyscience/variable/descriptor_array.py
def __repr__(self) -> str:
    """
    Return a string representation of the DescriptorArray, showing its name, value, variance, and unit.
    Large arrays are summarized for brevity.
    """
    # Base string with name
    string = f"<{self.__class__.__name__} '{self._name}': "

    # Summarize array values
    values_summary = np.array2string(
        self._array.values,
        precision=4,
        threshold=10,  # Show full array if <=10 elements, else summarize
        edgeitems=3,  # Show first and last 3 elements for large arrays
    )
    string += f'values={values_summary}'

    # Add errors if they exist
    if self._array.variances is not None:
        errors_summary = np.array2string(
            self.error,
            precision=4,
            threshold=10,
            edgeitems=3,
        )
        string += f', errors={errors_summary}'

    # Add unit
    obj_unit = str(self._array.unit)
    if obj_unit and obj_unit != 'dimensionless':
        string += f', unit={obj_unit}'

    string += '>'
    string = string.replace('\n', ',')
    return string

as_dict

as_dict(skip=None)

Dict representation of the current DescriptorArray. The dict contains the value, unit and variances, in addition to the properties of DescriptorBase.

Source code in src/easyscience/variable/descriptor_array.py
def as_dict(self, skip: Optional[List[str]] = None) -> Dict[str, Any]:
    """
    Dict representation of the current DescriptorArray. The dict contains the value, unit and variances,
    in addition to the properties of DescriptorBase.
    """
    raw_dict = super().as_dict(skip=skip)
    raw_dict['value'] = self._array.values
    raw_dict['unit'] = str(self._array.unit)
    raw_dict['variance'] = self._array.variances
    raw_dict['dimensions'] = self._array.dims
    return raw_dict
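
An illustrative sketch of the serialised form (the exact set of keys inherited from DescriptorBase may differ; only the keys added by this class are shown):

from easyscience.variable.descriptor_array import DescriptorArray

d = DescriptorArray(name='counts', value=[[1.0, 2.0]], unit='counts')
serialised = d.as_dict()
print(serialised['value'])       # the raw numpy values
print(serialised['unit'])        # 'counts'
print(serialised['variance'])    # None, since no variances were given
print(serialised['dimensions'])  # ('dim0', 'dim1')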

_apply_operation

_apply_operation(other, operation, units_must_match=True)

Perform element-wise operations with another DescriptorNumber, DescriptorArray, list, or number.

:param other: The object to operate on. Must be a DescriptorArray or DescriptorNumber with compatible units, or a list with the same shape if the DescriptorArray is dimensionless.
:param operation: The operation to perform
:return: A new DescriptorArray representing the result of the operation.

Source code in src/easyscience/variable/descriptor_array.py
def _apply_operation(
    self,
    other: Union[DescriptorArray, DescriptorNumber, list, numbers.Number],
    operation: Callable,
    units_must_match: bool = True,
) -> DescriptorArray:
    """
    Perform element-wise operations with another DescriptorNumber, DescriptorArray, list, or number.

    :param other: The object to operate on. Must be a DescriptorArray or DescriptorNumber with compatible units,
                or a list with the same shape if the DescriptorArray is dimensionless.
    :param operation: The operation to perform
    :return: A new DescriptorArray representing the result of the operation.
    """
    if isinstance(other, numbers.Number):
        # Does not need to be dimensionless for multiplication and division
        if self.unit not in [None, 'dimensionless'] and units_must_match:
            raise UnitError('Numbers can only be used together with dimensionless values')
        new_full_value = operation(self.full_value, other)

    elif isinstance(other, list):
        if self.unit not in [None, 'dimensionless'] and units_must_match:
            raise UnitError('Operations with lists are only allowed for dimensionless values')

        # Ensure dimensions match
        if np.shape(other) != self._array.values.shape:
            raise ValueError(f'Shape of {other=} must match the shape of DescriptorArray values')

        other = sc.array(dims=self._array.dims, values=other)
        new_full_value = operation(self._array, other)  # Let scipp handle operation for uncertainty propagation

    elif isinstance(other, DescriptorNumber):
        try:
            other_converted = other.__copy__()
            other_converted.convert_unit(self.unit)
        except UnitError:
            if units_must_match:
                raise UnitError(f'Values with units {self.unit} and {other.unit} are not compatible') from None
        # Operations with a DescriptorNumber that has a variance WILL introduce
        # correlations between the elements of the DescriptorArray.
        # See, https://content.iospress.com/articles/journal-of-neutron-research/jnr220049
        # However, DescriptorArray does not consider the covariance between
        # elements of the array. Hence, the broadcasting is "manually"
        # performed to work around `scipp` and a warning raised to the end user.
        if self._array.variances is not None or other.variance is not None:
            warn(
                'Correlations introduced by this operation will not be considered.\
                  See https://content.iospress.com/articles/journal-of-neutron-research/jnr220049\
                  for further details',
                UserWarning,
            )
        # Cheeky copy() of broadcasted scipp array to force scipp to perform the broadcast here
        broadcasted = sc.broadcast(other_converted.full_value, dims=self._array.dims, shape=self._array.shape).copy()
        new_full_value = operation(self.full_value, broadcasted)

    elif isinstance(other, DescriptorArray):
        try:
            other_converted = other.__copy__()
            other_converted.convert_unit(self.unit)
        except UnitError:
            if units_must_match:
                raise UnitError(f'Values with units {self.unit} and {other.unit} are incompatible') from None

        # Ensure dimensions match
        if self.full_value.dims != other_converted.full_value.dims:
            raise ValueError(
                f'Dimensions of the DescriptorArrays do not match: '
                f'{self.full_value.dims} vs {other_converted.full_value.dims}'
            )

        new_full_value = operation(self.full_value, other_converted.full_value)

    else:
        return NotImplemented

    descriptor_array = DescriptorArray.from_scipp(name=self.name, full_value=new_full_value)
    descriptor_array.name = descriptor_array.unique_name
    return descriptor_array

_rapply_operation

_rapply_operation(other, operation, units_must_match=True)

Handle reverse operations for DescriptorArrays, DescriptorNumbers, lists, and scalars. Ensures unit compatibility when other is a DescriptorNumber.

Source code in src/easyscience/variable/descriptor_array.py
def _rapply_operation(
    self,
    other: Union[DescriptorArray, DescriptorNumber, list, numbers.Number],
    operation: Callable,
    units_must_match: bool = True,
) -> DescriptorArray:
    """
    Handle reverse operations for DescriptorArrays, DescriptorNumbers, lists, and scalars.
    Ensures unit compatibility when `other` is a DescriptorNumber.
    """

    def reversed_operation(a, b):
        return operation(b, a)

    if isinstance(other, DescriptorNumber):
        # Ensure unit compatibility for DescriptorNumber
        original_unit = self.unit
        try:
            self.convert_unit(other.unit)  # Convert `self` to `other`'s unit
        except UnitError:
            # Only allowed operations with different units are
            # multiplication and division. We try to convert
            # the units for mul/div, but if the conversion
            # fails it's no big deal.
            if units_must_match:
                raise UnitError(f'Values with units {self.unit} and {other.unit} are incompatible') from None
        result = self._apply_operation(other, reversed_operation, units_must_match)
        # Revert `self` to its original unit
        self.convert_unit(original_unit)
        return result
    else:
        # Delegate the operation to self for other types (e.g., list, scalar)
        return self._apply_operation(other, reversed_operation, units_must_match)

__array_ufunc__

__array_ufunc__(ufunc, method, *inputs, **kwargs)

DescriptorArray does not generally support Numpy array functions. For example, np.argwhere(descriptorArray: DescriptorArray) should fail. Modify this function if you want to add such functionality.

Source code in src/easyscience/variable/descriptor_array.py
def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
    """
    DescriptorArray does not generally support Numpy array functions.
    For example, `np.argwhere(descriptorArray: DescriptorArray)` should fail.
    Modify this function if you want to add such functionality.
    """
    return NotImplemented

__array_function__

__array_function__(func, types, args, kwargs)

DescriptorArray does not generally support Numpy array functions. For example, np.argwhere(descriptorArray: DescriptorArray) should fail. Modify this function if you want to add such functionality.

Source code in src/easyscience/variable/descriptor_array.py
def __array_function__(self, func, types, args, kwargs):
    """
    DescriptorArray does not generally support Numpy array functions.
    For example, `np.argwhere(descriptorArray: DescriptorArray)` should fail.
    Modify this function if you want to add such functionality.
    """
    return NotImplemented

__add__

__add__(other)

Perform element-wise addition with another DescriptorNumber, DescriptorArray, list, or number.

:param other: The object to add. Must be a DescriptorArray or DescriptorNumber with compatible units, or a list with the same shape if the DescriptorArray is dimensionless, or a number.
:return: A new DescriptorArray representing the result of the addition.

Source code in src/easyscience/variable/descriptor_array.py
def __add__(self, other: Union[DescriptorArray, DescriptorNumber, list, numbers.Number]) -> DescriptorArray:
    """
    Perform element-wise addition with another DescriptorNumber, DescriptorArray, list, or number.

    :param other: The object to add. Must be a DescriptorArray or DescriptorNumber with compatible units,
                or a list with the same shape if the DescriptorArray is dimensionless, or a number.
    :return: A new DescriptorArray representing the result of the addition.
    """
    return self._apply_operation(other, operator.add)
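
A hedged sketch of element-wise addition with automatic unit conversion of the right-hand operand (illustrative values):

from easyscience.variable.descriptor_array import DescriptorArray

a = DescriptorArray(name='a', value=[1.0, 2.0], unit='m')
b = DescriptorArray(name='b', value=[10.0, 20.0], unit='cm')

c = a + b       # b is converted to metres before the addition
print(c.value)  # [1.1 2.2]
print(c.unit)   # 'm'

# a + 1.0  # would raise UnitError: plain numbers only combine with dimensionless values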

__radd__

__radd__(other)

Handle reverse addition for DescriptorArrays, DescriptorNumbers, lists, and scalars. Ensures unit compatibility when other is a DescriptorNumber.

Source code in src/easyscience/variable/descriptor_array.py
def __radd__(self, other: Union[DescriptorNumber, list, numbers.Number]) -> DescriptorArray:
    """
    Handle reverse addition for DescriptorArrays, DescriptorNumbers, lists, and scalars.
    Ensures unit compatibility when `other` is a DescriptorNumber.
    """
    return self._rapply_operation(other, operator.add)

__sub__

__sub__(other)

Perform element-wise subtraction with another DescriptorArray, list, or number.

:param other: The object to subtract. Must be a DescriptorArray with compatible units, or a list with the same shape if the DescriptorArray is dimensionless.
:return: A new DescriptorArray representing the result of the subtraction.

Source code in src/easyscience/variable/descriptor_array.py
def __sub__(self, other: Union[DescriptorArray, list, np.ndarray, numbers.Number]) -> DescriptorArray:
    """
    Perform element-wise subtraction with another DescriptorArray, list, or number.

    :param other: The object to subtract. Must be a DescriptorArray with compatible units,
                or a list with the same shape if the DescriptorArray is dimensionless.
    :return: A new DescriptorArray representing the result of the subtraction.
    """
    if isinstance(other, (DescriptorArray, DescriptorNumber, list, numbers.Number)):
        # Leverage __neg__ and __add__ for subtraction
        if isinstance(other, list):
            # Use numpy to negate all elements of the list
            value = (-np.array(other)).tolist()
        else:
            value = -other
        return self.__add__(value)
    else:
        return NotImplemented

__rsub__

__rsub__(other)

Perform element-wise subtraction with another DescriptorNumber, list, or number.

:param other: The object to subtract. Must be a DescriptorArray with compatible units, or a list with the same shape if the DescriptorArray is dimensionless.
:return: A new DescriptorArray representing the result of the subtraction.

Source code in src/easyscience/variable/descriptor_array.py
def __rsub__(self, other: Union[DescriptorNumber, list, numbers.Number]) -> DescriptorArray:
    """
    Perform element-wise subtraction with another DescriptorNumber, list, or number.

    :param other: The object to subtract. Must be a DescriptorArray with compatible units,
                or a list with the same shape if the DescriptorArray is dimensionless.
    :return: A new DescriptorArray representing the result of the subtraction.
    """
    if isinstance(other, (DescriptorNumber, list, numbers.Number)):
        if isinstance(other, list):
            # Use numpy to negate all elements of the list
            value = (-np.array(other)).tolist()
        else:
            value = -other
        return -(self.__radd__(value))
    else:
        return NotImplemented

__mul__

__mul__(other)

Perform element-wise multiplication with another DescriptorNumber, DescriptorArray, list, or number.

:param other: The object to multiply. Must be a DescriptorArray or DescriptorNumber with compatible units, or a list with the same shape if the DescriptorArray is dimensionless.
:return: A new DescriptorArray representing the result of the multiplication.

Source code in src/easyscience/variable/descriptor_array.py
def __mul__(self, other: Union[DescriptorArray, DescriptorNumber, list, numbers.Number]) -> DescriptorArray:
    """
    Perform element-wise multiplication with another DescriptorNumber, DescriptorArray, list, or number.

    :param other: The object to multiply. Must be a DescriptorArray or DescriptorNumber with compatible units,
                or a list with the same shape if the DescriptorArray is dimensionless.
    :return: A new DescriptorArray representing the result of the multiplication.
    """
    if not isinstance(other, (DescriptorArray, DescriptorNumber, list, numbers.Number)):
        return NotImplemented
    return self._apply_operation(other, operator.mul, units_must_match=False)
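
A sketch of multiplication (illustrative): units do not need to match, and the resulting unit is combined by scipp.

from easyscience.variable.descriptor_array import DescriptorArray

speed = DescriptorArray(name='speed', value=[2.0, 3.0], unit='m/s')
duration = DescriptorArray(name='duration', value=[4.0, 5.0], unit='s')

distance = speed * duration
print(distance.value)  # [ 8. 15.]
print(distance.unit)   # 'm'

scaled = speed * 2.0   # plain numbers are allowed here because units need not match
print(scaled.value)    # [4. 6.]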

__rmul__

__rmul__(other)

Handle reverse multiplication for DescriptorNumbers, lists, and scalars. Ensures unit compatibility when other is a DescriptorNumber.

Source code in src/easyscience/variable/descriptor_array.py
def __rmul__(self, other: Union[DescriptorNumber, list, numbers.Number]) -> DescriptorArray:
    """
    Handle reverse multiplication for DescriptorNumbers, lists, and scalars.
    Ensures unit compatibility when `other` is a DescriptorNumber.
    """
    if not isinstance(other, (DescriptorNumber, list, numbers.Number)):
        return NotImplemented
    return self._rapply_operation(other, operator.mul, units_must_match=False)

__truediv__

__truediv__(other)

Perform element-wise division with another DescriptorNumber, DescriptorArray, list, or number.

:param other: The object to use as a denominator. Must be a DescriptorArray or DescriptorNumber with compatible units, or a list with the same shape if the DescriptorArray is dimensionless.
:return: A new DescriptorArray representing the result of the division.

Source code in src/easyscience/variable/descriptor_array.py
def __truediv__(self, other: Union[DescriptorArray, DescriptorNumber, list, numbers.Number]) -> DescriptorArray:
    """
    Perform element-wise division with another DescriptorNumber, DescriptorArray, list, or number.

    :param other: The object to use as a denominator. Must be a DescriptorArray or DescriptorNumber with compatible units,
                or a list with the same shape if the DescriptorArray is dimensionless.
    :return: A new DescriptorArray representing the result of the division.
    """
    if not isinstance(other, (DescriptorArray, DescriptorNumber, list, numbers.Number)):
        return NotImplemented

    if isinstance(other, numbers.Number):
        original_other = other
    elif isinstance(other, list):
        original_other = np.array(other)
    elif isinstance(other, (DescriptorArray, DescriptorNumber)):
        original_other = other.value

    if np.any(original_other == 0):
        raise ZeroDivisionError('Cannot divide by zero')
    return self._apply_operation(other, operator.truediv, units_must_match=False)
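
A sketch of division (illustrative): a zero anywhere in the denominator raises ZeroDivisionError before the operation is attempted.

from easyscience.variable.descriptor_array import DescriptorArray

distance = DescriptorArray(name='distance', value=[8.0, 15.0], unit='m')
duration = DescriptorArray(name='duration', value=[4.0, 5.0], unit='s')

speed = distance / duration
print(speed.value)  # [2. 3.]
print(speed.unit)   # 'm/s'

# distance / [1.0, 0.0]  # would raise ZeroDivisionError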

__rtruediv__

__rtruediv__(other)

Handle reverse division for DescriptorNumbers, lists, and scalars. Ensures unit compatibility when other is a DescriptorNumber.

Source code in src/easyscience/variable/descriptor_array.py
def __rtruediv__(self, other: Union[DescriptorNumber, list, numbers.Number]) -> DescriptorArray:
    """
    Handle reverse division for DescriptorNumbers, lists, and scalars.
    Ensures unit compatibility when `other` is a DescriptorNumber.
    """
    if not isinstance(other, (DescriptorNumber, list, numbers.Number)):
        return NotImplemented

    if np.any(self.full_value.values == 0):
        raise ZeroDivisionError('Cannot divide by zero')

    # First use __div__ to compute `self / other`
    # but first converting to the units of other
    inverse_result = self._rapply_operation(other, operator.truediv, units_must_match=False)
    return inverse_result

__pow__

__pow__(other)

Perform element-wise exponentiation with another DescriptorNumber or number.

:param other: The exponent. Must be a number or DescriptorNumber with no unit or variance.
:return: A new DescriptorArray representing the result of the exponentiation.

Source code in src/easyscience/variable/descriptor_array.py
def __pow__(self, other: Union[DescriptorNumber, numbers.Number]) -> DescriptorArray:
    """
    Perform element-wise exponentiation with another DescriptorNumber or number.

    :param other: The exponent. Must be a number or DescriptorNumber with
                no unit or variance.
    :return: A new DescriptorArray representing the result of the exponentiation.
    """
    if not isinstance(other, (numbers.Number, DescriptorNumber)):
        return NotImplemented

    if isinstance(other, numbers.Number):
        exponent = other
    elif isinstance(other, DescriptorNumber):
        if other.unit != 'dimensionless':
            raise UnitError('Exponents must be dimensionless')
        if other.variance is not None:
            raise ValueError('Exponents must not have variance')
        exponent = other.value
    else:
        return NotImplemented
    try:
        new_value = self.full_value**exponent
    except Exception as message:
        raise message from None
    if np.any(np.isnan(new_value.values)):
        raise ValueError('The result of the exponentiation is not a number')
    descriptor_array = DescriptorArray.from_scipp(name=self.name, full_value=new_value)
    descriptor_array.name = descriptor_array.unique_name
    return descriptor_array
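
A short exponentiation sketch (illustrative values):

import scipp as sc
from easyscience.variable import DescriptorArray

lengths = DescriptorArray.from_scipp(
    name='lengths',
    full_value=sc.array(dims=['x'], values=[1.0, 2.0, 3.0], unit='m'),
)
areas = lengths ** 2   # values are squared and the unit becomes m^2
# A dimensionless DescriptorNumber without variance may also be used as the exponent.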

__rpow__

__rpow__(other)

Reverse pow with a descriptor array, a ** array, is not supported. Raising a value to the power of an array does not make sense, so a ValueError is raised.

Source code in src/easyscience/variable/descriptor_array.py
def __rpow__(self, other: numbers.Number):
    """
    Reverse pow with a descriptor array, `a ** array`, is not supported.
    Raising a value to the power of an array does not make sense,
    so a ValueError is raised.
    """
    raise ValueError('Raising a value to the power of an array does not make sense.')

__neg__

__neg__()

Negate all values in the DescriptorArray.

Source code in src/easyscience/variable/descriptor_array.py
def __neg__(self) -> DescriptorArray:
    """
    Negate all values in the DescriptorArray.
    """
    new_value = -self.full_value
    descriptor_array = DescriptorArray.from_scipp(name=self.name, full_value=new_value)
    descriptor_array.name = descriptor_array.unique_name
    return descriptor_array

__abs__

__abs__()

Replace all elements in the DescriptorArray with their absolute values. Note that this is different from the norm of the DescriptorArray.

Source code in src/easyscience/variable/descriptor_array.py
def __abs__(self) -> DescriptorArray:
    """
    Replace all elements in the DescriptorArray with their
    absolute values. Note that this is different from the
    norm of the DescriptorArray.
    """
    new_value = abs(self.full_value)
    descriptor_array = DescriptorArray.from_scipp(name=self.name, full_value=new_value)
    descriptor_array.name = descriptor_array.unique_name
    return descriptor_array

__getitem__

__getitem__(a)

Slice using scipp syntax. Defer slicing to scipp.

Source code in src/easyscience/variable/descriptor_array.py
def __getitem__(self, a) -> DescriptorArray:
    """
    Slice using scipp syntax.
    Defer slicing to scipp.
    """
    descriptor = DescriptorArray.from_scipp(name=self.name, full_value=self.full_value.__getitem__(a))
    descriptor.name = descriptor.unique_name
    return descriptor
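
Slicing follows scipp's `array[dim, index_or_slice]` convention; a small sketch (illustrative):

import scipp as sc
from easyscience.variable import DescriptorArray

grid = DescriptorArray.from_scipp(
    name='grid',
    full_value=sc.array(dims=['row', 'col'], values=[[1.0, 2.0], [3.0, 4.0]]),
)
first_row = grid['row', 0]    # 1-D DescriptorArray along 'col'
sub_block = grid['col', 0:1]  # keeps both dimensions, with 'col' of length 1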

__delitem__

__delitem__(a)

Defer slicing to scipp. This should fail, since scipp does not support delitem.

Source code in src/easyscience/variable/descriptor_array.py
def __delitem__(self, a):
    """
    Defer slicing to scipp.
    This should fail, since scipp does not support __delitem__.
    """
    return self.full_value.__delitem__(a)

__setitem__

__setitem__(a, b)

__setitem__ via slice is not allowed, since we currently do not give back a view of the DescriptorArray upon calling __getitem__.

Source code in src/easyscience/variable/descriptor_array.py
def __setitem__(self, a, b: Union[numbers.Number, list, DescriptorNumber, DescriptorArray]):
    """
    __setitem__ via slice is not allowed, since we currently do not give back a
    view of the DescriptorArray upon calling __getitem__.
    """
    raise AttributeError(
        f'{self.__class__.__name__} cannot be edited via slicing. Edit the underlying scipp\
                array via the `full_value` property, or create a\
                new {self.__class__.__name__}.'
    )

trace

trace(dimension1=None, dimension2=None)

Computes the trace over the descriptor array. The submatrix defined by dimension1 and dimension2 must be square. For a rank k tensor, the trace runs over the first two dimensions, resulting in a rank k-2 tensor.

:param dimension1, dimension2: First and second dimension to perform trace over. Must be in self.dimensions. If not defined, the trace will be taken over the first two dimensions.

Source code in src/easyscience/variable/descriptor_array.py
def trace(
    self, dimension1: Optional[str] = None, dimension2: Optional[str] = None
) -> Union[DescriptorArray, DescriptorNumber]:
    """
    Computes the trace over the descriptor array. The submatrix defined by `dimension1` and `dimension2` must be square.
    For a rank `k` tensor, the trace will run over the first two dimensions, resulting in a rank `k-2` tensor.

    :param dimension1, dimension2: First and second dimension to perform trace over. Must be in `self.dimensions`.
        If not defined, the trace will be taken over the first two dimensions.
    """
    if (dimension1 is not None and dimension2 is None) or (dimension1 is None and dimension2 is not None):
        raise ValueError('Either both or none of `dimension1` and `dimension2` must be set.')

    if dimension1 is not None and dimension2 is not None:
        if dimension1 == dimension2:
            raise ValueError(f'`{dimension1=}` and `{dimension2=}` must be different.')

        axes = []
        for dim in (dimension1, dimension2):
            if dim not in self.dimensions:
                raise ValueError(f'Dimension {dim=} does not exist in `self.dimensions`.')
            index = self.dimensions.index(dim)
            axes.append(index)
        remaining_dimensions = [dim for dim in self.dimensions if dim not in (dimension1, dimension2)]
    else:
        # Take the first two dimensions
        axes = (0, 1)
        # Pick out the remaining dims
        remaining_dimensions = self.dimensions[2:]

    trace_value = np.trace(self.value, axis1=axes[0], axis2=axes[1])
    trace_variance = np.trace(self.variance, axis1=axes[0], axis2=axes[1]) if self.variance is not None else None
    # The trace reduces a rank k tensor to a k-2.
    if remaining_dimensions == []:
        # No remaining dimensions; the trace is a scalar
        trace = sc.scalar(value=trace_value, unit=self.unit, variance=trace_variance)
        constructor = DescriptorNumber.from_scipp
    else:
        # Else, the result is some array
        trace = sc.array(dims=remaining_dimensions, values=trace_value, unit=self.unit, variances=trace_variance)
        constructor = DescriptorArray.from_scipp

    descriptor = constructor(name=self.name, full_value=trace)
    descriptor.name = descriptor.unique_name
    return descriptor
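
A trace sketch for a square 2x2 array (illustrative values):

import scipp as sc
from easyscience.variable import DescriptorArray

matrix = DescriptorArray.from_scipp(
    name='matrix',
    full_value=sc.array(dims=['i', 'j'], values=[[1.0, 2.0], [3.0, 4.0]], unit='m'),
)
tr = matrix.trace()                                        # 1.0 + 4.0 -> DescriptorNumber, 5.0 m
tr_explicit = matrix.trace(dimension1='i', dimension2='j') # same result, dimensions given explicitly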

sum

sum(dim=None)

Uses scipp to sum over the requested dims. :param dim: The dim(s) in the scipp array to sum over. If None, will sum over all dims.

Source code in src/easyscience/variable/descriptor_array.py
def sum(self, dim: Optional[Union[str, list]] = None) -> Union[DescriptorArray, DescriptorNumber]:
    """
    Uses scipp to sum over the requested dims.
    :param dim: The dim(s) in the scipp array to sum over. If `None`, will sum over all dims.
    """
    new_full_value = self.full_value.sum(dim=dim)

    # If fully reduced the result will be a DescriptorNumber,
    # otherwise a DescriptorArray
    if dim is None:
        constructor = DescriptorNumber.from_scipp
    else:
        constructor = DescriptorArray.from_scipp

    descriptor = constructor(name=self.name, full_value=new_full_value)
    descriptor.name = descriptor.unique_name
    return descriptor
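
A summation sketch (illustrative values):

import scipp as sc
from easyscience.variable import DescriptorArray

matrix = DescriptorArray.from_scipp(
    name='matrix',
    full_value=sc.array(dims=['i', 'j'], values=[[1.0, 2.0], [3.0, 4.0]], unit='m'),
)
row_totals = matrix.sum(dim='j')   # DescriptorArray along 'i' with values [3.0, 7.0]
grand_total = matrix.sum()         # fully reduced -> DescriptorNumber, 10.0 m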

_base_unit

_base_unit()

Returns the base unit of the current array. For example, if the unit is 100m, returns m.

Source code in src/easyscience/variable/descriptor_array.py
def _base_unit(self) -> str:
    """
    Returns the base unit of the current array.
    For example, if the unit is `100m`, returns `m`.
    """
    string = str(self._array.unit)
    for i, letter in enumerate(string):
        if letter == 'e':
            if string[i : i + 2] not in ['e+', 'e-']:
                return string[i:]
        elif letter not in ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '.', '+', '-']:
            return string[i:]
    return ''

easyscience.variable.DescriptorStr

Bases: DescriptorBase

A Descriptor for string values.

Source code in src/easyscience/variable/descriptor_str.py
class DescriptorStr(DescriptorBase):
    """
    A `Descriptor` for string values.
    """

    def __init__(
        self,
        name: str,
        value: str,
        unique_name: Optional[str] = None,
        description: Optional[str] = None,
        url: Optional[str] = None,
        display_name: Optional[str] = None,
        parent: Optional[Any] = None,
    ):
        super().__init__(
            name=name,
            unique_name=unique_name,
            description=description,
            url=url,
            display_name=display_name,
            parent=parent,
        )
        if not isinstance(value, str):
            raise ValueError(f'{value=} must be type str')
        self._string = value

    @property
    def value(self) -> str:
        """
        Get the value of self.

        :return: Value of self with unit.
        """
        return self._string

    @value.setter
    @property_stack
    def value(self, value: str) -> None:
        """
        Set the value of self.

        :param value: New value of self
        :return: None
        """
        if not isinstance(value, str):
            raise ValueError(f'{value=} must be type str')
        self._string = value

    def __repr__(self) -> str:
        """Return printable representation."""
        class_name = self.__class__.__name__
        obj_name = self._name
        obj_value = self._string
        return f"<{class_name} '{obj_name}': {obj_value}>"

    # To get return type right
    def __copy__(self) -> DescriptorStr:
        return super().__copy__()
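
A short usage sketch (names and values are illustrative):

from easyscience.variable import DescriptorStr

label = DescriptorStr(name='phase_label', value='alpha')
label.value            # -> 'alpha'
label.value = 'beta'   # the setter enforces that the new value is a str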

_string instance-attribute

_string = value

value property writable

value

Get the value of self.

:return: Value of self with unit.

__init__

__init__(
    name,
    value,
    unique_name=None,
    description=None,
    url=None,
    display_name=None,
    parent=None,
)
Source code in src/easyscience/variable/descriptor_str.py
def __init__(
    self,
    name: str,
    value: str,
    unique_name: Optional[str] = None,
    description: Optional[str] = None,
    url: Optional[str] = None,
    display_name: Optional[str] = None,
    parent: Optional[Any] = None,
):
    super().__init__(
        name=name,
        unique_name=unique_name,
        description=description,
        url=url,
        display_name=display_name,
        parent=parent,
    )
    if not isinstance(value, str):
        raise ValueError(f'{value=} must be type str')
    self._string = value

__repr__

__repr__()

Return printable representation.

Source code in src/easyscience/variable/descriptor_str.py
def __repr__(self) -> str:
    """Return printable representation."""
    class_name = self.__class__.__name__
    obj_name = self._name
    obj_value = self._string
    return f"<{class_name} '{obj_name}': {obj_value}>"

__copy__

__copy__()
Source code in src/easyscience/variable/descriptor_str.py
def __copy__(self) -> DescriptorStr:
    return super().__copy__()

easyscience.variable.DescriptorBool

Bases: DescriptorBase

A Descriptor for boolean values.

Source code in src/easyscience/variable/descriptor_bool.py
class DescriptorBool(DescriptorBase):
    """
    A `Descriptor` for boolean values.
    """

    def __init__(
        self,
        name: str,
        value: bool,
        unique_name: Optional[str] = None,
        description: Optional[str] = None,
        url: Optional[str] = None,
        display_name: Optional[str] = None,
        parent: Optional[Any] = None,
    ):
        if not isinstance(value, bool):
            raise ValueError(f'{value=} must be type bool')
        super().__init__(
            name=name,
            unique_name=unique_name,
            description=description,
            url=url,
            display_name=display_name,
            parent=parent,
        )
        if not isinstance(value, bool):
            raise TypeError(f'{value=} must be type bool')
        self._bool_value = value

    @property
    def value(self) -> bool:
        """
        Get the value of self.

        :return: Value of self
        """
        return self._bool_value

    @value.setter
    @property_stack
    def value(self, value: bool) -> None:
        """
        Set the value of self.

        :param value: New value of self
        :return: None
        """
        if not isinstance(value, bool):
            raise TypeError(f'{value=} must be type bool')
        self._bool_value = value

    def __repr__(self) -> str:
        """Return printable representation."""
        class_name = self.__class__.__name__
        obj_name = self._name
        obj_value = self._bool_value
        return f"<{class_name} '{obj_name}': {obj_value}>"

    # To get return type right
    def __copy__(self) -> DescriptorBool:
        return super().__copy__()
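
A short usage sketch (names and values are illustrative):

from easyscience.variable import DescriptorBool

flag = DescriptorBool(name='is_magnetic', value=False)
flag.value         # -> False
flag.value = True  # the setter enforces that the new value is a bool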

_bool_value instance-attribute

_bool_value = value

value property writable

value

Get the value of self.

:return: Value of self

__init__

__init__(
    name,
    value,
    unique_name=None,
    description=None,
    url=None,
    display_name=None,
    parent=None,
)
Source code in src/easyscience/variable/descriptor_bool.py
def __init__(
    self,
    name: str,
    value: bool,
    unique_name: Optional[str] = None,
    description: Optional[str] = None,
    url: Optional[str] = None,
    display_name: Optional[str] = None,
    parent: Optional[Any] = None,
):
    if not isinstance(value, bool):
        raise ValueError(f'{value=} must be type bool')
    super().__init__(
        name=name,
        unique_name=unique_name,
        description=description,
        url=url,
        display_name=display_name,
        parent=parent,
    )
    if not isinstance(value, bool):
        raise TypeError(f'{value=} must be type bool')
    self._bool_value = value

__repr__

__repr__()

Return printable representation.

Source code in src/easyscience/variable/descriptor_bool.py
def __repr__(self) -> str:
    """Return printable representation."""
    class_name = self.__class__.__name__
    obj_name = self._name
    obj_value = self._bool_value
    return f"<{class_name} '{obj_name}': {obj_value}>"

__copy__

__copy__()
Source code in src/easyscience/variable/descriptor_bool.py
def __copy__(self) -> DescriptorBool:
    return super().__copy__()

easyscience.variable.DescriptorAnyType

Bases: DescriptorBase

A Descriptor for any type that does not fit the other Descriptors. Should be avoided when possible. It was created to hold the symmetry operations used in the SpaceGroup class of EasyCrystallography.

Source code in src/easyscience/variable/descriptor_any_type.py
class DescriptorAnyType(DescriptorBase):
    """
    A `Descriptor` for any type that does not fit the other Descriptors. Should be avoided when possible.
    It was created to hold the symmetry operations used in the SpaceGroup class of EasyCrystallography.
    """

    def __init__(
        self,
        name: str,
        value: Any,
        unique_name: Optional[str] = None,
        description: Optional[str] = None,
        url: Optional[str] = None,
        display_name: Optional[str] = None,
        parent: Optional[Any] = None,
    ):
        """Constructor for the DescriptorAnyType class

        :param name: Name of the descriptor
        :param value: Value of the descriptor
        :param description: Description of the descriptor
        :param url: URL of the descriptor
        :param display_name: Display name of the descriptor
        :param parent: Parent of the descriptor
        .. note:: Undo/Redo functionality is implemented for the attributes `variance`, `error`, `unit` and `value`.
        """

        self._value = value

        super().__init__(
            name=name,
            unique_name=unique_name,
            description=description,
            url=url,
            display_name=display_name,
            parent=parent,
        )

    @property
    def value(self) -> Any:
        """
        Get the value.

        :return: Value of self.
        """
        return self._value

    @value.setter
    @property_stack
    def value(self, value: Any) -> None:
        """
        Set the value of self.

        :param value: New value for the DescriptorAnyType.
        """
        self._value = value

    def __copy__(self) -> DescriptorAnyType:
        return super().__copy__()

    def __repr__(self) -> str:
        """
        Return a string representation of the DescriptorAnyType, showing its name and value.
        """

        if hasattr(self._value, '__repr__'):
            value_repr = repr(self._value)
        else:
            value_repr = type(self._value)

        return f"<{self.__class__.__name__} '{self._name}': {value_repr}>"

    def as_dict(self, skip: Optional[List[str]] = None) -> Dict[str, Any]:
        raw_dict = super().as_dict(skip=skip)
        raw_dict['value'] = self._value
        return raw_dict
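
A short usage sketch (the stored value here is purely illustrative):

from easyscience.variable import DescriptorAnyType

sym_ops = DescriptorAnyType(name='symmetry_ops', value=['x,y,z', '-x,-y,-z'])
sym_ops.value   # -> ['x,y,z', '-x,-y,-z']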

_value instance-attribute

_value = value

value property writable

value

Get the value.

:return: Value of self.

__init__

__init__(
    name,
    value,
    unique_name=None,
    description=None,
    url=None,
    display_name=None,
    parent=None,
)

Constructor for the DescriptorAnyType class

:param name: Name of the descriptor :param value: Value of the descriptor :param description: Description of the descriptor :param url: URL of the descriptor :param display_name: Display name of the descriptor :param parent: Parent of the descriptor .. note:: Undo/Redo functionality is implemented for the attributes variance, error, unit and value.

Source code in src/easyscience/variable/descriptor_any_type.py
def __init__(
    self,
    name: str,
    value: Any,
    unique_name: Optional[str] = None,
    description: Optional[str] = None,
    url: Optional[str] = None,
    display_name: Optional[str] = None,
    parent: Optional[Any] = None,
):
    """Constructor for the DescriptorAnyType class

    :param name: Name of the descriptor
    :param value: Value of the descriptor
    :param description: Description of the descriptor
    :param url: URL of the descriptor
    :param display_name: Display name of the descriptor
    :param parent: Parent of the descriptor
    .. note:: Undo/Redo functionality is implemented for the attributes `variance`, `error`, `unit` and `value`.
    """

    self._value = value

    super().__init__(
        name=name,
        unique_name=unique_name,
        description=description,
        url=url,
        display_name=display_name,
        parent=parent,
    )

__copy__

__copy__()
Source code in src/easyscience/variable/descriptor_any_type.py
def __copy__(self) -> DescriptorAnyType:
    return super().__copy__()

__repr__

__repr__()

Return a string representation of the DescriptorAnyType, showing its name and value.

Source code in src/easyscience/variable/descriptor_any_type.py
def __repr__(self) -> str:
    """
    Return a string representation of the DescriptorAnyType, showing its name and value.
    """

    if hasattr(self._value, '__repr__'):
        value_repr = repr(self._value)
    else:
        value_repr = type(self._value)

    return f"<{self.__class__.__name__} '{self._name}': {value_repr}>"

as_dict

as_dict(skip=None)
Source code in src/easyscience/variable/descriptor_any_type.py
def as_dict(self, skip: Optional[List[str]] = None) -> Dict[str, Any]:
    raw_dict = super().as_dict(skip=skip)
    raw_dict['value'] = self._value
    return raw_dict

Parameters

easyscience.variable.Parameter

Bases: DescriptorNumber

A Parameter is a DescriptorNumber which can be used in fitting. It has additional fields to facilitate this.

Source code in src/easyscience/variable/parameter.py
class Parameter(DescriptorNumber):
    """
    A Parameter is a DescriptorNumber which can be used in fitting. It has additional fields to facilitate this.
    """

    # Used by serializer
    # We copy the parent's _REDIRECT and modify it to avoid altering the parent's class dict
    _REDIRECT = DescriptorNumber._REDIRECT.copy()
    _REDIRECT['callback'] = None

    def __init__(
        self,
        name: str,
        value: numbers.Number,
        unit: Optional[Union[str, sc.Unit]] = '',
        variance: Optional[numbers.Number] = 0.0,
        min: Optional[numbers.Number] = -np.inf,
        max: Optional[numbers.Number] = np.inf,
        fixed: Optional[bool] = False,
        unique_name: Optional[str] = None,
        description: Optional[str] = None,
        url: Optional[str] = None,
        display_name: Optional[str] = None,
        callback: property = property(),
        parent: Optional[Any] = None,
        **kwargs: Any,  # Additional keyword arguments (used for (de)serialization)
    ):
        """
        This class is an extension of a `DescriptorNumber`. Where the descriptor was for static
        objects, a `Parameter` is for dynamic objects. A parameter has the ability to be used in fitting and has
        additional fields to facilitate this.

        :param name: Name of this object
        :param value: Value of this object
        :param unit: This object can have a physical unit associated with it
        :param variance: The variance of the value
        :param min: The minimum value for fitting
        :param max: The maximum value for fitting
        :param fixed: If the parameter is free to vary during fitting
        :param description: A brief summary of what this object is
        :param url: Lookup url for documentation/information
        :param display_name: The name of the object as it should be displayed
        :param parent: The object which is the parent to this one

        Note:
            Undo/Redo functionality is implemented for the attributes `value`, `variance`, `error`, `min`, `max`, `bounds`, `fixed`, `unit`
        """  # noqa: E501
        # Extract and ignore serialization-specific fields from kwargs
        kwargs.pop('_dependency_string', None)
        kwargs.pop('_dependency_map_serializer_ids', None)
        kwargs.pop('_independent', None)

        if not isinstance(min, numbers.Number):
            raise TypeError('`min` must be a number')
        if not isinstance(max, numbers.Number):
            raise TypeError('`max` must be a number')
        if not isinstance(value, numbers.Number):
            raise TypeError('`value` must be a number')
        if value < min:
            raise ValueError(f'{value=} can not be less than {min=}')
        if value > max:
            raise ValueError(f'{value=} can not be greater than {max=}')
        if np.isclose(min, max, rtol=1e-9, atol=0.0):
            raise ValueError('The min and max bounds cannot be identical. Please use fixed=True instead to fix the value.')
        if not isinstance(fixed, bool):
            raise TypeError('`fixed` must be either True or False')
        self._independent = True
        self._fixed = fixed  # For fitting, but must be initialized before super().__init__
        self._min = sc.scalar(float(min), unit=unit)
        self._max = sc.scalar(float(max), unit=unit)

        super().__init__(
            name=name,
            value=value,
            unit=unit,
            variance=variance,
            unique_name=unique_name,
            description=description,
            url=url,
            display_name=display_name,
            parent=parent,
            **kwargs,  # Additional keyword arguments (used for (de)serialization)
        )

        self._callback = callback  # Callback is used by interface to link to model
        if self._callback.fdel is not None:
            weakref.finalize(self, self._callback.fdel)

        # Create additional fitting elements
        self._initial_scalar = copy.deepcopy(self._scalar)

    @classmethod
    def from_dependency(
        cls, name: str, dependency_expression: str, dependency_map: Optional[dict] = None, **kwargs
    ) -> Parameter:  # noqa: E501
        """
        Create a dependent Parameter directly from a dependency expression.

        :param name: The name of the parameter
        :param dependency_expression: The dependency expression to evaluate. This should be a string which can be evaluated by the ASTEval interpreter.
        :param dependency_map: A dictionary of dependency expression symbol name and dependency object pairs. This is inserted into the asteval interpreter to resolve dependencies.
        :param kwargs: Additional keyword arguments to pass to the Parameter constructor.
        :return: A new dependent Parameter object.
        """  # noqa: E501
        # Set default values for required parameters for the constructor; they get overwritten by the dependency anyway
        default_kwargs = {'value': 0.0, 'unit': '', 'variance': 0.0, 'min': -np.inf, 'max': np.inf}
        # Update with user-provided kwargs, to avoid errors.
        default_kwargs.update(kwargs)
        parameter = cls(name=name, **default_kwargs)
        parameter.make_dependent_on(dependency_expression=dependency_expression, dependency_map=dependency_map)
        return parameter
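
    # Example usage sketch (illustrative; not part of the library source):
    #
    #     a = Parameter(name='a', value=1.0)
    #     b = Parameter(name='b', value=2.0)
    #     c = Parameter.from_dependency(
    #         name='c',
    #         dependency_expression='a + b',
    #         dependency_map={'a': a, 'b': b},
    #     )
    #     c.value  # -> 3.0, and it follows later changes to a and b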

    def _update(self) -> None:
        """
        Update the parameter. This is called by the DescriptorNumbers/Parameters that have this Parameter as a dependency.
        """
        if not self._independent:
            # Update the value of the parameter using the dependency interpreter
            temporary_parameter = self._dependency_interpreter(self._clean_dependency_string)
            self._scalar.value = temporary_parameter.value
            self._scalar.unit = temporary_parameter.unit
            self._scalar.variance = temporary_parameter.variance
            self._min.value = (
                temporary_parameter.min if isinstance(temporary_parameter, Parameter) else temporary_parameter.value
            )  # noqa: E501
            self._max.value = (
                temporary_parameter.max if isinstance(temporary_parameter, Parameter) else temporary_parameter.value
            )  # noqa: E501
            self._min.unit = temporary_parameter.unit
            self._max.unit = temporary_parameter.unit
            self._notify_observers()
        else:
            warnings.warn('This parameter is not dependent. It cannot be updated.')

    def make_dependent_on(self, dependency_expression: str, dependency_map: Optional[dict] = None) -> None:
        """
        Make this parameter dependent on another parameter. This will overwrite the current value, unit, variance, min and max.

        How to use the dependency map:
        If a parameter c has a dependency expression of 'a + b', where a and b are parameters belonging to the model class,
        then the dependency map needs to have the form {'a': model.a, 'b': model.b}, where model is the model class.
        I.e. the values are the actual objects, whereas the keys are how they are represented in the dependency expression.

        The dependency map is not needed if the dependency expression uses the unique names of the parameters.
        Unique names in dependency expressions are defined by quotes, e.g. 'Parameter_0' or "Parameter_0" depending on
        the quotes used for the expression.

        :param dependency_expression:
            The dependency expression to evaluate. This should be a string which
            can be evaluated by a python interpreter.

        :param dependency_map:
            A dictionary of dependency expression symbol name and dependency object pairs.
            This is inserted into the asteval interpreter to resolve dependencies.

        """  # noqa: E501
        if not isinstance(dependency_expression, str):
            raise TypeError('`dependency_expression` must be a string representing a valid dependency expression.')
        if not (isinstance(dependency_map, dict) or dependency_map is None):
            raise TypeError(
                '`dependency_map` must be a dictionary of dependencies and their '
                'corresponding names in the dependency expression.'
            )  # noqa: E501
        if isinstance(dependency_map, dict):
            for key, value in dependency_map.items():
                if not isinstance(key, str):
                    raise TypeError(
                        '`dependency_map` keys must be strings representing the names of '
                        'the dependencies in the dependency expression.'
                    )  # noqa: E501
                if not isinstance(value, DescriptorNumber):
                    raise TypeError(
                        f'`dependency_map` values must be DescriptorNumbers or Parameters. Got {type(value)} for {key}.'
                    )  # noqa: E501

        # If we're overwriting the dependency, store the old attributes
        # in case we need to revert back to the old dependency
        self._previous_independent = self._independent
        if not self._independent:
            self._previous_dependency = {
                '_dependency_string': self._dependency_string,
                '_dependency_map': self._dependency_map,
                '_dependency_interpreter': self._dependency_interpreter,
                '_clean_dependency_string': self._clean_dependency_string,
            }
            for dependency in self._dependency_map.values():
                dependency._detach_observer(self)

        self._independent = False
        self._dependency_string = dependency_expression
        self._dependency_map = dependency_map if dependency_map is not None else {}
        # List of allowed python constructs for the asteval interpreter
        asteval_config = {
            'import': False,
            'importfrom': False,
            'assert': False,
            'augassign': False,
            'delete': False,
            'if': True,
            'ifexp': True,
            'for': False,
            'formattedvalue': False,
            'functiondef': False,
            'print': False,
            'raise': False,
            'listcomp': False,
            'dictcomp': False,
            'setcomp': False,
            'try': False,
            'while': False,
            'with': False,
        }
        self._dependency_interpreter = Interpreter(config=asteval_config)

        # Process the dependency expression for unique names
        try:
            self._process_dependency_unique_names(self._dependency_string)
        except ValueError as error:
            self._revert_dependency(skip_detach=True)
            raise error

        for key, value in self._dependency_map.items():
            self._dependency_interpreter.symtable[key] = value
            self._dependency_interpreter.readonly_symbols.add(
                key
            )  # Don't allow overwriting of the dependencies in the dependency expression  # noqa: E501
            value._attach_observer(self)
        # Check the dependency expression for errors
        try:
            dependency_result = self._dependency_interpreter.eval(self._clean_dependency_string, raise_errors=True)
        except NameError as message:
            self._revert_dependency()
            raise NameError(
                '\nUnknown name encountered in dependency expression:'
                + '\n'
                + '\n'.join(str(message).split('\n')[1:])
                + '\nPlease check your expression or add the name to the `dependency_map`'
            ) from None
        except Exception as message:
            self._revert_dependency()
            raise SyntaxError(
                '\nError encountered in dependency expression:'
                + '\n'
                + '\n'.join(str(message).split('\n')[1:])
                + '\nPlease check your expression'
            ) from None
        if not isinstance(dependency_result, DescriptorNumber):
            error_string = self._dependency_string
            self._revert_dependency()
            raise TypeError(
                f'The dependency expression: "{error_string}" returned a {type(dependency_result)},'
                ' it should return a Parameter or DescriptorNumber.'
            )  # noqa: E501
        # Check for cyclic dependencies
        try:
            self._validate_dependencies()
        except RuntimeError as error:
            self._revert_dependency()
            raise error
        # Update the parameter with the dependency result
        self._fixed = False
        self._update()
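
    # Example usage sketch (illustrative; not part of the library source):
    #
    #     a = Parameter(name='a', value=2.0, unit='m')
    #     c = Parameter(name='c', value=0.0, unit='m')
    #     c.make_dependent_on('2 * a', dependency_map={'a': a})
    #     c.value  # -> 4.0
    #
    #     # Alternatively, reference `a` by its unique name in quotes,
    #     # in which case no dependency_map is needed:
    #     # c.make_dependent_on(f"2 * '{a.unique_name}'")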

    def make_independent(self) -> None:
        """
        Make this parameter independent.
        This will remove the dependency expression, the dependency map and the dependency interpreter.

        :return: None
        """
        if not self._independent:
            for dependency in self._dependency_map.values():
                dependency._detach_observer(self)
            self._independent = True
            del self._dependency_map
            del self._dependency_interpreter
            del self._dependency_string
            del self._clean_dependency_string
        else:
            raise AttributeError('This parameter is already independent.')

    @property
    def independent(self) -> bool:
        """
        Is the parameter independent?

        :return: True = independent, False = dependent
        """
        return self._independent

    @independent.setter
    def independent(self, value: bool) -> None:
        raise AttributeError(
            'This property is read-only. Use `make_independent` and `make_dependent_on` to change the state of the parameter.'
        )  # noqa: E501

    @property
    def dependency_expression(self) -> str:
        """
        Get the dependency expression of this parameter.

        :return: The dependency expression of this parameter.
        """
        if not self._independent:
            return self._dependency_string
        else:
            raise AttributeError('This parameter is independent. It has no dependency expression.')

    @dependency_expression.setter
    def dependency_expression(self, new_expression: str) -> None:
        raise AttributeError(
            'Dependency expression is read-only. Use `make_dependent_on` to change the dependency expression.'
        )  # noqa: E501

    @property
    def dependency_map(self) -> Dict[str, DescriptorNumber]:
        """
        Get the dependency map of this parameter.

        :return: The dependency map of this parameter.
        """
        if not self._independent:
            return self._dependency_map
        else:
            raise AttributeError('This parameter is independent. It has no dependency map.')

    @dependency_map.setter
    def dependency_map(self, new_map: Dict[str, DescriptorNumber]) -> None:
        raise AttributeError('Dependency map is read-only. Use `make_dependent_on` to change the dependency map.')

    @property
    def value_no_call_back(self) -> numbers.Number:
        """
        Get the currently hold value of self suppressing call back.

        :return: Value of self without unit.
        """
        return self._scalar.value

    @property
    def full_value(self) -> Variable:
        """
        Get the value of self as a scipp scalar. This should be usable for most cases.
        If a scipp scalar is not acceptable then the raw value can be obtained through `obj.value`.

        :return: Value of self with unit and variance.
        """
        return self._scalar

    @full_value.setter
    def full_value(self, scalar: Variable) -> None:
        raise AttributeError(
            f'Full_value is read-only. Change the value and variance separately. Or create a new {self.__class__.__name__}.'
        )  # noqa: E501

    @property
    def value(self) -> numbers.Number:
        """
        Get the value of self as a Number.

        :return: Value of self without unit.
        """
        if self._callback.fget is not None:
            existing_value = self._callback.fget()
            if existing_value != self._scalar.value:
                self._scalar.value = existing_value
        return self._scalar.value

    @value.setter
    @property_stack
    def value(self, value: numbers.Number) -> None:
        """
        Set the value of self. This only updates the value of the scipp scalar.

        :param value: New value of self
        """
        if self._independent:
            if not isinstance(value, numbers.Number):
                raise TypeError(f'{value=} must be a number')

            value = float(value)
            if value < self._min.value:
                value = self._min.value
            if value > self._max.value:
                value = self._max.value

            self._scalar.value = value

            if self._callback.fset is not None:
                self._callback.fset(self._scalar.value)

            # Notify observers of the change
            self._notify_observers()
        else:
            raise AttributeError('This is a dependent parameter, its value cannot be set directly.')

    @DescriptorNumber.variance.setter
    def variance(self, variance_float: float) -> None:
        """
        Set the variance.

        :param variance_float: Variance as a float
        """
        if self._independent:
            DescriptorNumber.variance.fset(self, variance_float)
        else:
            raise AttributeError('This is a dependent parameter, its variance cannot be set directly.')

    @DescriptorNumber.error.setter
    def error(self, value: float) -> None:
        """
        Set the standard deviation for the parameter.

        :param value: New error value
        """
        if self._independent:
            DescriptorNumber.error.fset(self, value)
        else:
            raise AttributeError('This is a dependent parameter, its error cannot be set directly.')

    def _convert_unit(self, unit_str: str) -> None:
        """
        Perform unit conversion. The value, max and min can change on unit change.

        :param new_unit: new unit
        :return: None
        """
        super()._convert_unit(unit_str=unit_str)
        new_unit = sc.Unit(unit_str)  # unit_str is tested in super method
        self._min = self._min.to(unit=new_unit)
        self._max = self._max.to(unit=new_unit)

    @notify_observers
    def convert_unit(self, unit_str: str) -> None:
        """
        Perform unit conversion. The value, max and min can change on unit change.

        :param new_unit: new unit
        :return: None
        """
        self._convert_unit(unit_str)
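
    # Example usage sketch (illustrative): the bounds are converted along
    # with the value, as `_convert_unit` above shows.
    #
    #     length = Parameter(name='length', value=1.5, unit='m', min=0.0, max=10.0)
    #     length.convert_unit('cm')
    #     length.value, length.min, length.max  # -> (150.0, 0.0, 1000.0)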

    @property
    def min(self) -> numbers.Number:
        """
        Get the minimum value for fitting.

        :return: minimum value
        """
        return self._min.value

    @min.setter
    @property_stack
    def min(self, min_value: numbers.Number) -> None:
        """
        Set the minimum value for fitting.
        - implements undo/redo functionality.

        :param min_value: new minimum value
        :return: None
        """
        if self._independent:
            if not isinstance(min_value, numbers.Number):
                raise TypeError('`min` must be a number')
            if np.isclose(min_value, self._max.value, rtol=1e-9, atol=0.0):
                raise ValueError('The min and max bounds cannot be identical. Please use fixed=True instead to fix the value.')
            if min_value <= self.value:
                self._min.value = min_value
            else:
                raise ValueError(f'The current value ({self.value}) is smaller than the desired min value ({min_value}).')
            self._notify_observers()
        else:
            raise AttributeError('This is a dependent parameter, its minimum value cannot be set directly.')

    @property
    def max(self) -> numbers.Number:
        """
        Get the maximum value for fitting.

        :return: maximum value
        """
        return self._max.value

    @max.setter
    @property_stack
    def max(self, max_value: numbers.Number) -> None:
        """
        Set the maximum value for fitting.
        - implements undo/redo functionality.

        :param max_value: new maximum value
        :return: None
        """
        if self._independent:
            if not isinstance(max_value, numbers.Number):
                raise TypeError('`max` must be a number')
            if np.isclose(max_value, self._min.value, rtol=1e-9, atol=0.0):
                raise ValueError('The min and max bounds cannot be identical. Please use fixed=True instead to fix the value.')
            if max_value >= self.value:
                self._max.value = max_value
            else:
                raise ValueError(f'The current value ({self.value}) is greater than the desired max value ({max_value}).')
            self._notify_observers()
        else:
            raise AttributeError('This is a dependent parameter, its maximum value cannot be set directly.')

    @property
    def fixed(self) -> bool:
        """
        Can the parameter vary while fitting?

        :return: True = fixed, False = can vary
        """
        return self._fixed

    @fixed.setter
    @property_stack
    def fixed(self, fixed: bool) -> None:
        """
        Change the parameter vary while fitting state.
        - implements undo/redo functionality.

        :param fixed: True = fixed, False = can vary
        """
        if not isinstance(fixed, bool):
            raise ValueError(f'{fixed=} must be a boolean. Got {type(fixed)}')
        if self._independent:
            self._fixed = fixed
        else:
            if self._global_object.stack.enabled:
                # Remove the recorded change from the stack
                global_object.stack.pop()
            raise AttributeError('This is a dependent parameter, dependent parameters cannot be fixed.')

    # Is this alias really needed?
    @property
    def free(self) -> bool:
        return not self.fixed

    @free.setter
    def free(self, value: bool) -> None:
        self.fixed = not value

    def as_dict(self, skip: Optional[List[str]] = None) -> Dict[str, Any]:
        """Overwrite the as_dict method to handle dependency information."""
        raw_dict = super().as_dict(skip=skip)

        # Add dependency information for dependent parameters
        if not self._independent:
            # Save the dependency expression
            raw_dict['_dependency_string'] = self._clean_dependency_string

            # Mark that this parameter is dependent
            raw_dict['_independent'] = self._independent

            # Convert dependency_map to use serializer_ids
            raw_dict['_dependency_map_serializer_ids'] = {}
            for key, obj in self._dependency_map.items():
                raw_dict['_dependency_map_serializer_ids'][key] = obj._DescriptorNumber__serializer_id

        return raw_dict

    def _revert_dependency(self, skip_detach=False) -> None:
        """
        Revert the dependency to the old dependency. This is used when an error is raised during setting the dependency.
        """
        if self._previous_independent is True:
            self.make_independent()
        else:
            if not skip_detach:
                for dependency in self._dependency_map.values():
                    dependency._detach_observer(self)
            for key, value in self._previous_dependency.items():
                setattr(self, key, value)
            for dependency in self._dependency_map.values():
                dependency._attach_observer(self)
            del self._previous_dependency
        del self._previous_independent

    def _process_dependency_unique_names(self, dependency_expression: str):
        """
        Add the unique names of the parameters to the ASTEval interpreter. This is used to evaluate the dependency expression.

        :param dependency_expression: The dependency expression to be evaluated
        """
        # Get the unique_names from the expression string regardless of the quotes used
        inputted_unique_names = re.findall("('.+?')", dependency_expression)
        inputted_unique_names += re.findall('(".+?")', dependency_expression)

        clean_dependency_string = dependency_expression
        existing_unique_names = self._global_object.map.vertices()
        # Add the unique names of the parameters to the ASTEVAL interpreter
        for name in inputted_unique_names:
            stripped_name = name.strip('\'"')
            if stripped_name not in existing_unique_names:
                raise ValueError(
                    f'A Parameter with unique_name {stripped_name} does not exist. Please check your dependency expression.'
                )  # noqa: E501
            dependent_parameter = self._global_object.map.get_item_by_key(stripped_name)
            if isinstance(dependent_parameter, DescriptorNumber):
                self._dependency_map['__' + stripped_name + '__'] = dependent_parameter
                clean_dependency_string = clean_dependency_string.replace(name, '__' + stripped_name + '__')
            else:
                raise ValueError(
                    f'The object with unique_name {stripped_name} is not a Parameter or DescriptorNumber. '
                    'Please check your dependency expression.'
                )  # noqa: E501
        self._clean_dependency_string = clean_dependency_string

    @classmethod
    def from_dict(cls, obj_dict: dict) -> 'Parameter':
        """
        Custom deserialization to handle parameter dependencies.
        Override the parent method to handle dependency information.
        """
        # Extract dependency information before creating the parameter
        raw_dict = obj_dict.copy()  # Don't modify the original dict
        dependency_string = raw_dict.pop('_dependency_string', None)
        dependency_map_serializer_ids = raw_dict.pop('_dependency_map_serializer_ids', None)
        is_independent = raw_dict.pop('_independent', True)
        # Note: Keep _serializer_id in the dict so it gets passed to __init__

        # Create the parameter using the base class method (serializer_id is now handled in __init__)
        param = super().from_dict(raw_dict)

        # Store dependency information for later resolution
        if not is_independent:
            param._pending_dependency_string = dependency_string
            param._pending_dependency_map_serializer_ids = dependency_map_serializer_ids
            # Keep parameter as independent initially - will be made dependent after all objects are loaded
            param._independent = True

        return param

    def __copy__(self) -> Parameter:
        new_obj = super().__copy__()
        new_obj._callback = property()
        return new_obj

    def __repr__(self) -> str:
        """
        Return printable representation of a Parameter object.
        """
        super_str = super().__repr__()
        super_str = super_str[:-1]
        s = []
        if self.fixed:
            super_str += ' (fixed)'
        s.append(super_str)
        s.append('bounds=[%s:%s]' % (repr(float(self.min)), repr(float(self.max))))
        return '%s>' % ', '.join(s)

    # Seems redundant
    # def __float__(self) -> float:
    #     return float(self._scalar.value)

    def __add__(self, other: Union[DescriptorNumber, Parameter, numbers.Number]) -> Parameter:
        if isinstance(other, numbers.Number):
            if self.unit != 'dimensionless':
                raise UnitError('Numbers can only be added to dimensionless values')
            new_full_value = self.full_value + other
            min_value = self.min + other
            max_value = self.max + other
        elif isinstance(other, DescriptorNumber):  # Parameter inherits from DescriptorNumber and is also handled here
            other_unit = other.unit
            try:
                other._convert_unit(self.unit)
            except UnitError:
                raise UnitError(f'Values with units {self.unit} and {other.unit} cannot be added') from None
            new_full_value = self.full_value + other.full_value
            min_value = self.min + other.min if isinstance(other, Parameter) else self.min + other.value
            max_value = self.max + other.max if isinstance(other, Parameter) else self.max + other.value
            other._convert_unit(other_unit)
        else:
            return NotImplemented
        parameter = Parameter.from_scipp(name=self.name, full_value=new_full_value, min=min_value, max=max_value)
        parameter.name = parameter.unique_name
        return parameter

    def __radd__(self, other: Union[DescriptorNumber, numbers.Number]) -> Parameter:
        if isinstance(other, numbers.Number):
            if self.unit != 'dimensionless':
                raise UnitError('Numbers can only be added to dimensionless values')
            new_full_value = self.full_value + other
            min_value = self.min + other
            max_value = self.max + other
        elif isinstance(other, DescriptorNumber):  # Parameter inherits from DescriptorNumber and is also handled here
            original_unit = self.unit
            try:
                self._convert_unit(other.unit)
            except UnitError:
                raise UnitError(f'Values with units {other.unit} and {self.unit} cannot be added') from None
            new_full_value = self.full_value + other.full_value
            min_value = self.min + other.value
            max_value = self.max + other.value
            self._convert_unit(original_unit)
        else:
            return NotImplemented
        parameter = Parameter.from_scipp(name=self.name, full_value=new_full_value, min=min_value, max=max_value)
        parameter.name = parameter.unique_name
        return parameter

    def __sub__(self, other: Union[DescriptorNumber, Parameter, numbers.Number]) -> Parameter:
        if isinstance(other, numbers.Number):
            if self.unit != 'dimensionless':
                raise UnitError('Numbers can only be subtracted from dimensionless values')
            new_full_value = self.full_value - other
            min_value = self.min - other
            max_value = self.max - other
        elif isinstance(other, DescriptorNumber):  # Parameter inherits from DescriptorNumber and is also handled here
            other_unit = other.unit
            try:
                other._convert_unit(self.unit)
            except UnitError:
                raise UnitError(f'Values with units {self.unit} and {other.unit} cannot be subtracted') from None
            new_full_value = self.full_value - other.full_value
            if isinstance(other, Parameter):
                min_value = self.min - other.max if other.max != np.inf else -np.inf
                max_value = self.max - other.min if other.min != -np.inf else np.inf
            else:
                min_value = self.min - other.value
                max_value = self.max - other.value
            other._convert_unit(other_unit)
        else:
            return NotImplemented
        parameter = Parameter.from_scipp(name=self.name, full_value=new_full_value, min=min_value, max=max_value)
        parameter.name = parameter.unique_name
        return parameter

    def __rsub__(self, other: Union[DescriptorNumber, numbers.Number]) -> Parameter:
        if isinstance(other, numbers.Number):
            if self.unit != 'dimensionless':
                raise UnitError('Numbers can only be subtracted from dimensionless values')
            new_full_value = other - self.full_value
            min_value = other - self.max
            max_value = other - self.min
        elif isinstance(other, DescriptorNumber):  # Parameter inherits from DescriptorNumber and is also handled here
            original_unit = self.unit
            try:
                self._convert_unit(other.unit)
            except UnitError:
                raise UnitError(f'Values with units {other.unit} and {self.unit} cannot be subtracted') from None
            new_full_value = other.full_value - self.full_value
            min_value = other.value - self.max
            max_value = other.value - self.min
            self._convert_unit(original_unit)
        else:
            return NotImplemented
        parameter = Parameter.from_scipp(name=self.name, full_value=new_full_value, min=min_value, max=max_value)
        parameter.name = parameter.unique_name
        return parameter

    def __mul__(self, other: Union[DescriptorNumber, Parameter, numbers.Number]) -> Parameter:
        if isinstance(other, numbers.Number):
            new_full_value = self.full_value * other
            if other == 0:
                descriptor_number = DescriptorNumber.from_scipp(name=self.name, full_value=new_full_value)
                descriptor_number.name = descriptor_number.unique_name
                return descriptor_number
            combinations = [self.min * other, self.max * other]
        elif isinstance(other, DescriptorNumber):  # Parameter inherits from DescriptorNumber and is also handled here
            new_full_value = self.full_value * other.full_value
            if (
                other.value == 0 and type(other) is DescriptorNumber
            ):  # Only return DescriptorNumber if other is strictly 0, i.e. not a parameter  # noqa: E501
                descriptor_number = DescriptorNumber.from_scipp(name=self.name, full_value=new_full_value)
                descriptor_number.name = descriptor_number.unique_name
                return descriptor_number
            if isinstance(other, Parameter):
                combinations = []
                for first, second in [
                    (self.min, other.min),
                    (self.min, other.max),
                    (self.max, other.min),
                    (self.max, other.max),
                ]:  # noqa: E501
                    if first == 0 and np.isinf(second):
                        combinations.append(0)
                    elif second == 0 and np.isinf(first):
                        combinations.append(0)
                    else:
                        combinations.append(first * second)
            else:
                combinations = [self.min * other.value, self.max * other.value]
        else:
            return NotImplemented
        min_value = min(combinations)
        max_value = max(combinations)
        parameter = Parameter.from_scipp(name=self.name, full_value=new_full_value, min=min_value, max=max_value)
        parameter._convert_unit(parameter._base_unit())
        parameter.name = parameter.unique_name
        return parameter

    def __rmul__(self, other: Union[DescriptorNumber, numbers.Number]) -> Parameter:
        if isinstance(other, numbers.Number):
            new_full_value = other * self.full_value
            if other == 0:
                descriptor_number = DescriptorNumber.from_scipp(name=self.name, full_value=new_full_value)
                descriptor_number.name = descriptor_number.unique_name
                return descriptor_number
            combinations = [other * self.min, other * self.max]
        elif isinstance(other, DescriptorNumber):  # Parameter inherits from DescriptorNumber and is also handled here
            new_full_value = other.full_value * self.full_value
            if other.value == 0:
                descriptor_number = DescriptorNumber.from_scipp(name=self.name, full_value=new_full_value)
                descriptor_number.name = descriptor_number.unique_name
                return descriptor_number
            combinations = [self.min * other.value, self.max * other.value]
        else:
            return NotImplemented
        min_value = min(combinations)
        max_value = max(combinations)
        parameter = Parameter.from_scipp(name=self.name, full_value=new_full_value, min=min_value, max=max_value)
        parameter._convert_unit(parameter._base_unit())
        parameter.name = parameter.unique_name
        return parameter

    def __truediv__(self, other: Union[DescriptorNumber, Parameter, numbers.Number]) -> Parameter:
        if isinstance(other, numbers.Number):
            if other == 0:
                raise ZeroDivisionError('Cannot divide by zero')
            new_full_value = self.full_value / other
            combinations = [self.min / other, self.max / other]
        elif isinstance(other, DescriptorNumber):  # Parameter inherits from DescriptorNumber and is also handled here
            other_value = other.value
            if other_value == 0:
                raise ZeroDivisionError('Cannot divide by zero')
            new_full_value = self.full_value / other.full_value
            if isinstance(other, Parameter):
                if other.min < 0 and other.max > 0:
                    combinations = [-np.inf, np.inf]
                elif other.min == 0:
                    if self.min < 0 and self.max > 0:
                        combinations = [-np.inf, np.inf]
                    elif self.min >= 0:
                        combinations = [self.min / other.max, np.inf]
                    elif self.max <= 0:
                        combinations = [-np.inf, self.max / other.max]
                elif other.max == 0:
                    if self.min < 0 and self.max > 0:
                        combinations = [-np.inf, np.inf]
                    elif self.min >= 0:
                        combinations = [-np.inf, self.min / other.min]
                    elif self.max <= 0:
                        combinations = [self.max / other.min, np.inf]
                else:
                    combinations = [self.min / other.min, self.max / other.max, self.min / other.max, self.max / other.min]
            else:
                combinations = [self.min / other.value, self.max / other.value]
            other.value = other_value
        else:
            return NotImplemented
        min_value = min(combinations)
        max_value = max(combinations)
        parameter = Parameter.from_scipp(name=self.name, full_value=new_full_value, min=min_value, max=max_value)
        parameter._convert_unit(parameter._base_unit())
        parameter.name = parameter.unique_name
        return parameter

    def __rtruediv__(self, other: Union[DescriptorNumber, numbers.Number]) -> Parameter:
        original_self = self.value
        if original_self == 0:
            raise ZeroDivisionError('Cannot divide by zero')
        if isinstance(other, numbers.Number):
            new_full_value = other / self.full_value
            other_value = other
            if other_value == 0:
                descriptor_number = DescriptorNumber.from_scipp(name=self.name, full_value=new_full_value)
                descriptor_number.name = descriptor_number.unique_name
                return descriptor_number
        elif isinstance(other, DescriptorNumber):  # Parameter inherits from DescriptorNumber and is also handled here
            new_full_value = other.full_value / self.full_value
            other_value = other.value
            if other_value == 0:
                descriptor_number = DescriptorNumber.from_scipp(name=self.name, full_value=new_full_value)
                descriptor_number.name = descriptor_number.unique_name
                return descriptor_number
        else:
            return NotImplemented
        if self.min < 0 and self.max > 0:
            combinations = [-np.inf, np.inf]
        elif self.min == 0:
            if other_value > 0:
                combinations = [other_value / self.max, np.inf]
            elif other_value < 0:
                combinations = [-np.inf, other_value / self.max]
        elif self.max == 0:
            if other_value > 0:
                combinations = [-np.inf, other_value / self.min]
            elif other_value < 0:
                combinations = [other_value / self.min, np.inf]
        else:
            combinations = [other_value / self.min, other_value / self.max]
        min_value = min(combinations)
        max_value = max(combinations)
        parameter = Parameter.from_scipp(name=self.name, full_value=new_full_value, min=min_value, max=max_value)
        parameter._convert_unit(parameter._base_unit())
        parameter.name = parameter.unique_name
        self.value = original_self
        return parameter

    def __pow__(self, other: Union[DescriptorNumber, numbers.Number]) -> Parameter:
        if isinstance(other, numbers.Number):
            exponent = other
        elif type(other) is DescriptorNumber:  # Strictly a DescriptorNumber, We can't raise to the power of a Parameter
            if other.unit != 'dimensionless':
                raise UnitError('Exponents must be dimensionless')
            if other.variance is not None:
                raise ValueError('Exponents must not have variance')
            exponent = other.value
        else:
            return NotImplemented

        try:
            new_full_value = self.full_value**exponent
        except Exception as message:
            raise message from None

        if np.isnan(new_full_value.value):
            raise ValueError('The result of the exponentiation is not a number')
        if exponent == 0:
            descriptor_number = DescriptorNumber.from_scipp(name=self.name, full_value=new_full_value)
            descriptor_number.name = descriptor_number.unique_name
            return descriptor_number
        elif exponent < 0:
            if self.min < 0 and self.max > 0:
                combinations = [-np.inf, np.inf]
            elif self.min == 0:
                combinations = [self.max**exponent, np.inf]
            elif self.max == 0:
                combinations = [-np.inf, self.min**exponent]
            else:
                combinations = [self.min**exponent, self.max**exponent]
        else:
            combinations = [self.min**exponent, self.max**exponent]
        if exponent % 2 == 0:
            if self.min < 0 and self.max > 0:
                combinations.append(0)
            combinations = [abs(combination) for combination in combinations]
        elif exponent % 1 != 0:
            if self.min < 0:
                combinations.append(0)
            combinations = [combination for combination in combinations if combination >= 0]
        min_value = min(combinations)
        max_value = max(combinations)
        parameter = Parameter.from_scipp(name=self.name, full_value=new_full_value, min=min_value, max=max_value)
        parameter.name = parameter.unique_name
        return parameter

    def __neg__(self) -> Parameter:
        new_full_value = -self.full_value
        min_value = -self.max
        max_value = -self.min
        parameter = Parameter.from_scipp(name=self.name, full_value=new_full_value, min=min_value, max=max_value)
        parameter.name = parameter.unique_name
        return parameter

    def __abs__(self) -> Parameter:
        new_full_value = abs(self.full_value)
        combinations = [abs(self.min), abs(self.max)]
        if self.min < 0 and self.max > 0:
            combinations.append(0.0)
        min_value = min(combinations)
        max_value = max(combinations)
        parameter = Parameter.from_scipp(name=self.name, full_value=new_full_value, min=min_value, max=max_value)
        parameter.name = parameter.unique_name
        return parameter

    def resolve_pending_dependencies(self) -> None:
        """Resolve pending dependencies after deserialization.

        This method should be called after all parameters have been deserialized
        to establish dependency relationships using serializer_ids.
        """
        if hasattr(self, '_pending_dependency_string'):
            dependency_string = self._pending_dependency_string
            dependency_map = {}

            if hasattr(self, '_pending_dependency_map_serializer_ids'):
                dependency_map_serializer_ids = self._pending_dependency_map_serializer_ids

                # Build dependency_map by looking up objects by serializer_id
                for key, serializer_id in dependency_map_serializer_ids.items():
                    dep_obj = self._find_parameter_by_serializer_id(serializer_id)
                    if dep_obj is not None:
                        dependency_map[key] = dep_obj
                    else:
                        raise ValueError(f"Cannot find parameter with serializer_id '{serializer_id}'")

            # Establish the dependency relationship
            try:
                self.make_dependent_on(dependency_expression=dependency_string, dependency_map=dependency_map)
            except Exception as e:
                raise ValueError(f"Error establishing dependency '{dependency_string}': {e}")

            # Clean up temporary attributes
            delattr(self, '_pending_dependency_string')
            delattr(self, '_pending_dependency_map_serializer_ids')

    def _find_parameter_by_serializer_id(self, serializer_id: str) -> Optional['DescriptorNumber']:
        """Find a parameter by its serializer_id from all parameters in the global map."""
        for obj in self._global_object.map._store.values():
            if isinstance(obj, DescriptorNumber) and hasattr(obj, '_DescriptorNumber__serializer_id'):
                if obj._DescriptorNumber__serializer_id == serializer_id:
                    return obj
        return None

_REDIRECT class-attribute instance-attribute

_REDIRECT = copy()

_independent instance-attribute

_independent = True

_fixed instance-attribute

_fixed = fixed

_min instance-attribute

_min = scalar(float(min), unit=unit)

_max instance-attribute

_max = scalar(float(max), unit=unit)

_callback instance-attribute

_callback = callback

_initial_scalar instance-attribute

_initial_scalar = deepcopy(_scalar)

independent property writable

independent

Is the parameter independent?

:return: True = independent, False = dependent

dependency_expression property writable

dependency_expression

Get the dependency expression of this parameter.

:return: The dependency expression of this parameter.

dependency_map property writable

dependency_map

Get the dependency map of this parameter.

:return: The dependency map of this parameter.

value_no_call_back property

value_no_call_back

Get the currently held value of self, suppressing the callback.

:return: Value of self without unit.

full_value property writable

full_value

Get the value of self as a scipp scalar. This should be usable for most cases. If a scipp scalar is not acceptable, the raw value can be obtained through obj.value.

:return: Value of self with unit and variance.
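
For illustration, a minimal sketch of the difference between value and full_value, assuming Parameter is importable from easyscience.variable.parameter (the module path shown in the source listings); the names and values are made up:

from easyscience.variable.parameter import Parameter

length = Parameter(name='length', value=2.0, unit='m', variance=0.01)

print(length.value)       # plain number: 2.0
print(length.full_value)  # scipp scalar carrying the unit (m) and the variance (0.01)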

value property writable

value

Get the value of self as a Number.

:return: Value of self without unit.

min property writable

min

Get the minimum value for fitting.

:return: minimum value

max property writable

max

Get the maximum value for fitting.

:return: maximum value

fixed property writable

fixed

Is the parameter fixed during fitting?

:return: True = fixed, False = can vary

free property writable

free

Is the parameter free to vary during fitting?

:return: True = can vary, False = fixed

__init__

__init__(
    name,
    value,
    unit='',
    variance=0.0,
    min=-np.inf,
    max=np.inf,
    fixed=False,
    unique_name=None,
    description=None,
    url=None,
    display_name=None,
    callback=property(),
    parent=None,
    **kwargs,
)

This class is an extension of a DescriptorNumber. Where the descriptor was for static objects, a Parameter is for dynamic objects. A parameter has the ability to be used in fitting and has additional fields to facilitate this.

:param name: Name of this object
:param value: Value of this object
:param unit: This object can have a physical unit associated with it
:param variance: The variance of the value
:param min: The minimum value for fitting
:param max: The maximum value for fitting
:param fixed: If the parameter is free to vary during fitting
:param description: A brief summary of what this object is
:param url: Lookup url for documentation/information
:param display_name: The name of the object as it should be displayed
:param parent: The object which is the parent to this one

Note: Undo/Redo functionality is implemented for the attributes value, variance, error, min, max, bounds, fixed, unit

Source code in src/easyscience/variable/parameter.py
def __init__(
    self,
    name: str,
    value: numbers.Number,
    unit: Optional[Union[str, sc.Unit]] = '',
    variance: Optional[numbers.Number] = 0.0,
    min: Optional[numbers.Number] = -np.inf,
    max: Optional[numbers.Number] = np.inf,
    fixed: Optional[bool] = False,
    unique_name: Optional[str] = None,
    description: Optional[str] = None,
    url: Optional[str] = None,
    display_name: Optional[str] = None,
    callback: property = property(),
    parent: Optional[Any] = None,
    **kwargs: Any,  # Additional keyword arguments (used for (de)serialization)
):
    """
    This class is an extension of a `DescriptorNumber`. Where the descriptor was for static
    objects, a `Parameter` is for dynamic objects. A parameter has the ability to be used in fitting and has
    additional fields to facilitate this.

    :param name: Name of this object
    :param value: Value of this object
    :param unit: This object can have a physical unit associated with it
    :param variance: The variance of the value
    :param min: The minimum value for fitting
    :param max: The maximum value for fitting
    :param fixed: If the parameter is free to vary during fitting
    :param description: A brief summary of what this object is
    :param url: Lookup url for documentation/information
    :param display_name: The name of the object as it should be displayed
    :param parent: The object which is the parent to this one

    Note:
        Undo/Redo functionality is implemented for the attributes `value`, `variance`, `error`, `min`, `max`, `bounds`, `fixed`, `unit`
    """  # noqa: E501
    # Extract and ignore serialization-specific fields from kwargs
    kwargs.pop('_dependency_string', None)
    kwargs.pop('_dependency_map_serializer_ids', None)
    kwargs.pop('_independent', None)

    if not isinstance(min, numbers.Number):
        raise TypeError('`min` must be a number')
    if not isinstance(max, numbers.Number):
        raise TypeError('`max` must be a number')
    if not isinstance(value, numbers.Number):
        raise TypeError('`value` must be a number')
    if value < min:
        raise ValueError(f'{value=} can not be less than {min=}')
    if value > max:
        raise ValueError(f'{value=} can not be greater than {max=}')
    if np.isclose(min, max, rtol=1e-9, atol=0.0):
        raise ValueError('The min and max bounds cannot be identical. Please use fixed=True instead to fix the value.')
    if not isinstance(fixed, bool):
        raise TypeError('`fixed` must be either True or False')
    self._independent = True
    self._fixed = fixed  # For fitting, but must be initialized before super().__init__
    self._min = sc.scalar(float(min), unit=unit)
    self._max = sc.scalar(float(max), unit=unit)

    super().__init__(
        name=name,
        value=value,
        unit=unit,
        variance=variance,
        unique_name=unique_name,
        description=description,
        url=url,
        display_name=display_name,
        parent=parent,
        **kwargs,  # Additional keyword arguments (used for (de)serialization)
    )

    self._callback = callback  # Callback is used by interface to link to model
    if self._callback.fdel is not None:
        weakref.finalize(self, self._callback.fdel)

    # Create additional fitting elements
    self._initial_scalar = copy.deepcopy(self._scalar)
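
As a hedged usage sketch (names and values here are illustrative, and the import path follows the source listings above), a Parameter is typically created with a value, variance and fitting bounds, and its fixed flag controls whether a minimizer may vary it:

from easyscience.variable.parameter import Parameter

scale = Parameter(name='scale', value=1.0, variance=0.01, min=0.0, max=10.0)

scale.fixed = True           # exclude from fitting
scale.fixed = False          # let it vary again
print(scale.min, scale.max)  # 0.0 10.0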

from_dependency classmethod

from_dependency(
    name,
    dependency_expression,
    dependency_map=None,
    **kwargs,
)

Create a dependent Parameter directly from a dependency expression.

:param name: The name of the parameter
:param dependency_expression: The dependency expression to evaluate. This should be a string which can be evaluated by the ASTEval interpreter.
:param dependency_map: A dictionary of dependency expression symbol name and dependency object pairs. This is inserted into the asteval interpreter to resolve dependencies.
:param kwargs: Additional keyword arguments to pass to the Parameter constructor.
:return: A new dependent Parameter object.

Source code in src/easyscience/variable/parameter.py
@classmethod
def from_dependency(
    cls, name: str, dependency_expression: str, dependency_map: Optional[dict] = None, **kwargs
) -> Parameter:  # noqa: E501
    """
    Create a dependent Parameter directly from a dependency expression.

    :param name: The name of the parameter
    :param dependency_expression: The dependency expression to evaluate. This should be a string which can be evaluated by the ASTEval interpreter.
    :param dependency_map: A dictionary of dependency expression symbol name and dependency object pairs. This is inserted into the asteval interpreter to resolve dependencies.
    :param kwargs: Additional keyword arguments to pass to the Parameter constructor.
    :return: A new dependent Parameter object.
    """  # noqa: E501
    # Set default values for required parameters for the constructor, they get overwritten by the dependency anyways
    default_kwargs = {'value': 0.0, 'unit': '', 'variance': 0.0, 'min': -np.inf, 'max': np.inf}
    # Update with user-provided kwargs, to avoid errors.
    default_kwargs.update(kwargs)
    parameter = cls(name=name, **default_kwargs)
    parameter.make_dependent_on(dependency_expression=dependency_expression, dependency_map=dependency_map)
    return parameter
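
A minimal sketch of from_dependency, again assuming the import path from the source listings; 'total', 'a' and 'b' are illustrative names:

from easyscience.variable.parameter import Parameter

a = Parameter(name='a', value=2.0, unit='m')
b = Parameter(name='b', value=3.0, unit='m')

# The keys of dependency_map are the symbols used in the expression
total = Parameter.from_dependency(
    name='total',
    dependency_expression='a + b',
    dependency_map={'a': a, 'b': b},
)
print(total.value)  # 5.0
a.value = 4.0       # total should follow via the observer mechanism (expected 7.0)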

_update

_update()

Update the parameter. This is called by the DescriptorNumbers/Parameters who have this Parameter as a dependency.

Source code in src/easyscience/variable/parameter.py
def _update(self) -> None:
    """
    Update the parameter. This is called by the DescriptorNumbers/Parameters who have this Parameter as a dependency.
    """
    if not self._independent:
        # Update the value of the parameter using the dependency interpreter
        temporary_parameter = self._dependency_interpreter(self._clean_dependency_string)
        self._scalar.value = temporary_parameter.value
        self._scalar.unit = temporary_parameter.unit
        self._scalar.variance = temporary_parameter.variance
        self._min.value = (
            temporary_parameter.min if isinstance(temporary_parameter, Parameter) else temporary_parameter.value
        )  # noqa: E501
        self._max.value = (
            temporary_parameter.max if isinstance(temporary_parameter, Parameter) else temporary_parameter.value
        )  # noqa: E501
        self._min.unit = temporary_parameter.unit
        self._max.unit = temporary_parameter.unit
        self._notify_observers()
    else:
        warnings.warn('This parameter is not dependent. It cannot be updated.')

make_dependent_on

make_dependent_on(
    dependency_expression, dependency_map=None
)

Make this parameter dependent on another parameter. This will overwrite the current value, unit, variance, min and max.

How to use the dependency map: If a parameter c has a dependency expression of 'a + b', where a and b are parameters belonging to the model class, then the dependency map needs to have the form {'a': model.a, 'b': model.b}, where model is the model class. I.e. the values are the actual objects, whereas the keys are how they are represented in the dependency expression.

The dependency map is not needed if the dependency expression uses the unique names of the parameters. Unique names in dependency expressions are defined by quotes, e.g. 'Parameter_0' or "Parameter_0" depending on the quotes used for the expression.

:param dependency_expression: The dependency expression to evaluate. This should be a string which can be evaluated by a python interpreter.

:param dependency_map: A dictionary of dependency expression symbol name and dependency object pairs. This is inserted into the asteval interpreter to resolve dependencies.

Source code in src/easyscience/variable/parameter.py
def make_dependent_on(self, dependency_expression: str, dependency_map: Optional[dict] = None) -> None:
    """
    Make this parameter dependent on another parameter. This will overwrite the current value, unit, variance, min and max.

    How to use the dependency map:
    If a parameter c has a dependency expression of 'a + b', where a and b are parameters belonging to the model class,
    then the dependency map needs to have the form {'a': model.a, 'b': model.b}, where model is the model class.
    I.e. the values are the actual objects, whereas the keys are how they are represented in the dependency expression.

    The dependency map is not needed if the dependency expression uses the unique names of the parameters.
    Unique names in dependency expressions are defined by quotes, e.g. 'Parameter_0' or "Parameter_0" depending on
    the quotes used for the expression.

    :param dependency_expression:
        The dependency expression to evaluate. This should be a string which
        can be evaluated by a python interpreter.

    :param dependency_map:
        A dictionary of dependency expression symbol name and dependency object pairs.
        This is inserted into the asteval interpreter to resolve dependencies.

    """  # noqa: E501
    if not isinstance(dependency_expression, str):
        raise TypeError('`dependency_expression` must be a string representing a valid dependency expression.')
    if not (isinstance(dependency_map, dict) or dependency_map is None):
        raise TypeError(
            '`dependency_map` must be a dictionary of dependencies and their'
            'corresponding names in the dependecy expression.'
        )  # noqa: E501
    if isinstance(dependency_map, dict):
        for key, value in dependency_map.items():
            if not isinstance(key, str):
                raise TypeError(
                    '`dependency_map` keys must be strings representing the names of'
                    'the dependencies in the dependency expression.'
                )  # noqa: E501
            if not isinstance(value, DescriptorNumber):
                raise TypeError(
                    f'`dependency_map` values must be DescriptorNumbers or Parameters. Got {type(value)} for {key}.'
                )  # noqa: E501

    # If we're overwriting the dependency, store the old attributes
    # in case we need to revert back to the old dependency
    self._previous_independent = self._independent
    if not self._independent:
        self._previous_dependency = {
            '_dependency_string': self._dependency_string,
            '_dependency_map': self._dependency_map,
            '_dependency_interpreter': self._dependency_interpreter,
            '_clean_dependency_string': self._clean_dependency_string,
        }
        for dependency in self._dependency_map.values():
            dependency._detach_observer(self)

    self._independent = False
    self._dependency_string = dependency_expression
    self._dependency_map = dependency_map if dependency_map is not None else {}
    # List of allowed python constructs for the asteval interpreter
    asteval_config = {
        'import': False,
        'importfrom': False,
        'assert': False,
        'augassign': False,
        'delete': False,
        'if': True,
        'ifexp': True,
        'for': False,
        'formattedvalue': False,
        'functiondef': False,
        'print': False,
        'raise': False,
        'listcomp': False,
        'dictcomp': False,
        'setcomp': False,
        'try': False,
        'while': False,
        'with': False,
    }
    self._dependency_interpreter = Interpreter(config=asteval_config)

    # Process the dependency expression for unique names
    try:
        self._process_dependency_unique_names(self._dependency_string)
    except ValueError as error:
        self._revert_dependency(skip_detach=True)
        raise error

    for key, value in self._dependency_map.items():
        self._dependency_interpreter.symtable[key] = value
        self._dependency_interpreter.readonly_symbols.add(
            key
        )  # Dont allow overwriting of the dependencies in the dependency expression  # noqa: E501
        value._attach_observer(self)
    # Check the dependency expression for errors
    try:
        dependency_result = self._dependency_interpreter.eval(self._clean_dependency_string, raise_errors=True)
    except NameError as message:
        self._revert_dependency()
        raise NameError(
            '\nUnknown name encountered in dependecy expression:'
            + '\n'
            + '\n'.join(str(message).split('\n')[1:])
            + '\nPlease check your expression or add the name to the `dependency_map`'
        ) from None
    except Exception as message:
        self._revert_dependency()
        raise SyntaxError(
            '\nError encountered in dependecy expression:'
            + '\n'
            + '\n'.join(str(message).split('\n')[1:])
            + '\nPlease check your expression'
        ) from None
    if not isinstance(dependency_result, DescriptorNumber):
        error_string = self._dependency_string
        self._revert_dependency()
        raise TypeError(
            f'The dependency expression: "{error_string}" returned a {type(dependency_result)},'
            'it should return a Parameter or DescriptorNumber.'
        )  # noqa: E501
    # Check for cyclic dependencies
    try:
        self._validate_dependencies()
    except RuntimeError as error:
        self._revert_dependency()
        raise error
    # Update the parameter with the dependency result
    self._fixed = False
    self._update()
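
A hedged sketch of both ways to phrase a dependency expression, with illustrative names; the commented alternative uses the quoted unique-name form described above:

from easyscience.variable.parameter import Parameter

width = Parameter(name='width', value=2.0, unit='m')
height = Parameter(name='height', value=3.0, unit='m')
area = Parameter(name='area', value=0.0)  # value, unit and bounds are overwritten by the dependency

# Symbols in the expression are resolved through dependency_map
area.make_dependent_on(dependency_expression='w * h', dependency_map={'w': width, 'h': height})
print(area.value)  # 6.0, with the derived unit

# Alternatively, quote unique names in the expression and omit the map, e.g.:
# area.make_dependent_on(dependency_expression=f'"{width.unique_name}" * "{height.unique_name}"')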

make_independent

make_independent()

Make this parameter independent. This will remove the dependency expression, the dependency map and the dependency interpreter.

:return: None

Source code in src/easyscience/variable/parameter.py
def make_independent(self) -> None:
    """
    Make this parameter independent.
    This will remove the dependency expression, the dependency map and the dependency interpreter.

    :return: None
    """
    if not self._independent:
        for dependency in self._dependency_map.values():
            dependency._detach_observer(self)
        self._independent = True
        del self._dependency_map
        del self._dependency_interpreter
        del self._dependency_string
        del self._clean_dependency_string
    else:
        raise AttributeError('This parameter is already independent.')
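
A short sketch (illustrative names) of reverting a dependent parameter back to an independent one:

from easyscience.variable.parameter import Parameter

base = Parameter(name='base', value=1.0)
scaled = Parameter.from_dependency(name='scaled', dependency_expression='2 * base',
                                   dependency_map={'base': base})

scaled.make_independent()  # drops the expression, map and interpreter
scaled.value = 5.0         # the parameter can be set directly again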

variance

variance(variance_float)

Set the variance.

:param variance_float: Variance as a float

Source code in src/easyscience/variable/parameter.py
@DescriptorNumber.variance.setter
def variance(self, variance_float: float) -> None:
    """
    Set the variance.

    :param variance_float: Variance as a float
    """
    if self._independent:
        DescriptorNumber.variance.fset(self, variance_float)
    else:
        raise AttributeError('This is a dependent parameter, its variance cannot be set directly.')

error

error(value)

Set the standard deviation for the parameter.

:param value: New error value

Source code in src/easyscience/variable/parameter.py
@DescriptorNumber.error.setter
def error(self, value: float) -> None:
    """
    Set the standard deviation for the parameter.

    :param value: New error value
    """
    if self._independent:
        DescriptorNumber.error.fset(self, value)
    else:
        raise AttributeError('This is a dependent parameter, its error cannot be set directly.')
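
A small sketch (illustrative names) showing that the variance/error of a dependent parameter cannot be set directly:

from easyscience.variable.parameter import Parameter

base = Parameter(name='base', value=1.0)
derived = Parameter.from_dependency(name='derived', dependency_expression='3 * base',
                                    dependency_map={'base': base})

try:
    derived.error = 0.1
except AttributeError as exc:
    print(exc)  # dependent parameters reject direct error/variance assignment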

_convert_unit

_convert_unit(unit_str)

Perform unit conversion. The value, max and min can change on unit change.

:param new_unit: new unit
:return: None

Source code in src/easyscience/variable/parameter.py
def _convert_unit(self, unit_str: str) -> None:
    """
    Perform unit conversion. The value, max and min can change on unit change.

    :param new_unit: new unit
    :return: None
    """
    super()._convert_unit(unit_str=unit_str)
    new_unit = sc.Unit(unit_str)  # unit_str is tested in super method
    self._min = self._min.to(unit=new_unit)
    self._max = self._max.to(unit=new_unit)

convert_unit

convert_unit(unit_str)

Perform unit conversion. The value, max and min can change on unit change.

:param new_unit: new unit
:return: None

Source code in src/easyscience/variable/parameter.py
@notify_observers
def convert_unit(self, unit_str: str) -> None:
    """
    Perform unit conversion. The value, max and min can change on unit change.

    :param new_unit: new unit
    :return: None
    """
    self._convert_unit(unit_str)
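
A minimal sketch of convert_unit (names and values are illustrative); note that the bounds are converted together with the value:

from easyscience.variable.parameter import Parameter

distance = Parameter(name='distance', value=1.5, unit='m', min=0.0, max=10.0)
distance.convert_unit('cm')

print(distance.value)              # 150.0
print(distance.min, distance.max)  # 0.0 1000.0 -- the bounds follow the unit change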

as_dict

as_dict(skip=None)

Overwrite the as_dict method to handle dependency information.

Source code in src/easyscience/variable/parameter.py
def as_dict(self, skip: Optional[List[str]] = None) -> Dict[str, Any]:
    """Overwrite the as_dict method to handle dependency information."""
    raw_dict = super().as_dict(skip=skip)

    # Add dependency information for dependent parameters
    if not self._independent:
        # Save the dependency expression
        raw_dict['_dependency_string'] = self._clean_dependency_string

        # Mark that this parameter is dependent
        raw_dict['_independent'] = self._independent

        # Convert dependency_map to use serializer_ids
        raw_dict['_dependency_map_serializer_ids'] = {}
        for key, obj in self._dependency_map.items():
            raw_dict['_dependency_map_serializer_ids'][key] = obj._DescriptorNumber__serializer_id

    return raw_dict

_revert_dependency

_revert_dependency(skip_detach=False)

Revert the dependency to the old dependency. This is used when an error is raised during setting the dependency.

Source code in src/easyscience/variable/parameter.py
def _revert_dependency(self, skip_detach=False) -> None:
    """
    Revert the dependency to the old dependency. This is used when an error is raised during setting the dependency.
    """
    if self._previous_independent is True:
        self.make_independent()
    else:
        if not skip_detach:
            for dependency in self._dependency_map.values():
                dependency._detach_observer(self)
        for key, value in self._previous_dependency.items():
            setattr(self, key, value)
        for dependency in self._dependency_map.values():
            dependency._attach_observer(self)
        del self._previous_dependency
    del self._previous_independent

_process_dependency_unique_names

_process_dependency_unique_names(dependency_expression)

Add the unique names of the parameters to the ASTEval interpreter. This is used to evaluate the dependency expression.

:param dependency_expression: The dependency expression to be evaluated

Source code in src/easyscience/variable/parameter.py
def _process_dependency_unique_names(self, dependency_expression: str):
    """
    Add the unique names of the parameters to the ASTEval interpreter. This is used to evaluate the dependency expression.

    :param dependency_expression: The dependency expression to be evaluated
    """
    # Get the unique_names from the expression string regardless of the quotes used
    inputted_unique_names = re.findall("('.+?')", dependency_expression)
    inputted_unique_names += re.findall('(".+?")', dependency_expression)

    clean_dependency_string = dependency_expression
    existing_unique_names = self._global_object.map.vertices()
    # Add the unique names of the parameters to the ASTEVAL interpreter
    for name in inputted_unique_names:
        stripped_name = name.strip('\'"')
        if stripped_name not in existing_unique_names:
            raise ValueError(
                f'A Parameter with unique_name {stripped_name} does not exist. Please check your dependency expression.'
            )  # noqa: E501
        dependent_parameter = self._global_object.map.get_item_by_key(stripped_name)
        if isinstance(dependent_parameter, DescriptorNumber):
            self._dependency_map['__' + stripped_name + '__'] = dependent_parameter
            clean_dependency_string = clean_dependency_string.replace(name, '__' + stripped_name + '__')
        else:
            raise ValueError(
                f'The object with unique_name {stripped_name} is not a Parameter or DescriptorNumber. '
                'Please check your dependency expression.'
            )  # noqa: E501
    self._clean_dependency_string = clean_dependency_string

from_dict classmethod

from_dict(obj_dict)

Custom deserialization to handle parameter dependencies. Override the parent method to handle dependency information.

Source code in src/easyscience/variable/parameter.py
@classmethod
def from_dict(cls, obj_dict: dict) -> 'Parameter':
    """
    Custom deserialization to handle parameter dependencies.
    Override the parent method to handle dependency information.
    """
    # Extract dependency information before creating the parameter
    raw_dict = obj_dict.copy()  # Don't modify the original dict
    dependency_string = raw_dict.pop('_dependency_string', None)
    dependency_map_serializer_ids = raw_dict.pop('_dependency_map_serializer_ids', None)
    is_independent = raw_dict.pop('_independent', True)
    # Note: Keep _serializer_id in the dict so it gets passed to __init__

    # Create the parameter using the base class method (serializer_id is now handled in __init__)
    param = super().from_dict(raw_dict)

    # Store dependency information for later resolution
    if not is_independent:
        param._pending_dependency_string = dependency_string
        param._pending_dependency_map_serializer_ids = dependency_map_serializer_ids
        # Keep parameter as independent initially - will be made dependent after all objects are loaded
        param._independent = True

    return param

__copy__

__copy__()
Source code in src/easyscience/variable/parameter.py
def __copy__(self) -> Parameter:
    new_obj = super().__copy__()
    new_obj._callback = property()
    return new_obj

__repr__

__repr__()

Return printable representation of a Parameter object.

Source code in src/easyscience/variable/parameter.py
def __repr__(self) -> str:
    """
    Return printable representation of a Parameter object.
    """
    super_str = super().__repr__()
    super_str = super_str[:-1]
    s = []
    if self.fixed:
        super_str += ' (fixed)'
    s.append(super_str)
    s.append('bounds=[%s:%s]' % (repr(float(self.min)), repr(float(self.max))))
    return '%s>' % ', '.join(s)

__add__

__add__(other)
Source code in src/easyscience/variable/parameter.py
def __add__(self, other: Union[DescriptorNumber, Parameter, numbers.Number]) -> Parameter:
    if isinstance(other, numbers.Number):
        if self.unit != 'dimensionless':
            raise UnitError('Numbers can only be added to dimensionless values')
        new_full_value = self.full_value + other
        min_value = self.min + other
        max_value = self.max + other
    elif isinstance(other, DescriptorNumber):  # Parameter inherits from DescriptorNumber and is also handled here
        other_unit = other.unit
        try:
            other._convert_unit(self.unit)
        except UnitError:
            raise UnitError(f'Values with units {self.unit} and {other.unit} cannot be added') from None
        new_full_value = self.full_value + other.full_value
        min_value = self.min + other.min if isinstance(other, Parameter) else self.min + other.value
        max_value = self.max + other.max if isinstance(other, Parameter) else self.max + other.value
        other._convert_unit(other_unit)
    else:
        return NotImplemented
    parameter = Parameter.from_scipp(name=self.name, full_value=new_full_value, min=min_value, max=max_value)
    parameter.name = parameter.unique_name
    return parameter
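
A hedged sketch of parameter arithmetic (illustrative values): the operators return a new Parameter whose bounds are combined from the operands and whose name is set to its generated unique name:

from easyscience.variable.parameter import Parameter

p1 = Parameter(name='p1', value=1.0, unit='m', min=0.0, max=2.0)
p2 = Parameter(name='p2', value=2.0, unit='m', min=1.0, max=3.0)

total = p1 + p2
print(total.value)           # 3.0
print(total.min, total.max)  # 1.0 5.0 -- bounds are added element-wise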

__radd__

__radd__(other)
Source code in src/easyscience/variable/parameter.py
def __radd__(self, other: Union[DescriptorNumber, numbers.Number]) -> Parameter:
    if isinstance(other, numbers.Number):
        if self.unit != 'dimensionless':
            raise UnitError('Numbers can only be added to dimensionless values')
        new_full_value = self.full_value + other
        min_value = self.min + other
        max_value = self.max + other
    elif isinstance(other, DescriptorNumber):  # Parameter inherits from DescriptorNumber and is also handled here
        original_unit = self.unit
        try:
            self._convert_unit(other.unit)
        except UnitError:
            raise UnitError(f'Values with units {other.unit} and {self.unit} cannot be added') from None
        new_full_value = self.full_value + other.full_value
        min_value = self.min + other.value
        max_value = self.max + other.value
        self._convert_unit(original_unit)
    else:
        return NotImplemented
    parameter = Parameter.from_scipp(name=self.name, full_value=new_full_value, min=min_value, max=max_value)
    parameter.name = parameter.unique_name
    return parameter

__sub__

__sub__(other)
Source code in src/easyscience/variable/parameter.py
def __sub__(self, other: Union[DescriptorNumber, Parameter, numbers.Number]) -> Parameter:
    if isinstance(other, numbers.Number):
        if self.unit != 'dimensionless':
            raise UnitError('Numbers can only be subtracted from dimensionless values')
        new_full_value = self.full_value - other
        min_value = self.min - other
        max_value = self.max - other
    elif isinstance(other, DescriptorNumber):  # Parameter inherits from DescriptorNumber and is also handled here
        other_unit = other.unit
        try:
            other._convert_unit(self.unit)
        except UnitError:
            raise UnitError(f'Values with units {self.unit} and {other.unit} cannot be subtracted') from None
        new_full_value = self.full_value - other.full_value
        if isinstance(other, Parameter):
            min_value = self.min - other.max if other.max != np.inf else -np.inf
            max_value = self.max - other.min if other.min != -np.inf else np.inf
        else:
            min_value = self.min - other.value
            max_value = self.max - other.value
        other._convert_unit(other_unit)
    else:
        return NotImplemented
    parameter = Parameter.from_scipp(name=self.name, full_value=new_full_value, min=min_value, max=max_value)
    parameter.name = parameter.unique_name
    return parameter

__rsub__

__rsub__(other)
Source code in src/easyscience/variable/parameter.py
def __rsub__(self, other: Union[DescriptorNumber, numbers.Number]) -> Parameter:
    if isinstance(other, numbers.Number):
        if self.unit != 'dimensionless':
            raise UnitError('Numbers can only be subtracted from dimensionless values')
        new_full_value = other - self.full_value
        min_value = other - self.max
        max_value = other - self.min
    elif isinstance(other, DescriptorNumber):  # Parameter inherits from DescriptorNumber and is also handled here
        original_unit = self.unit
        try:
            self._convert_unit(other.unit)
        except UnitError:
            raise UnitError(f'Values with units {other.unit} and {self.unit} cannot be subtracted') from None
        new_full_value = other.full_value - self.full_value
        min_value = other.value - self.max
        max_value = other.value - self.min
        self._convert_unit(original_unit)
    else:
        return NotImplemented
    parameter = Parameter.from_scipp(name=self.name, full_value=new_full_value, min=min_value, max=max_value)
    parameter.name = parameter.unique_name
    return parameter

__mul__

__mul__(other)
Source code in src/easyscience/variable/parameter.py
def __mul__(self, other: Union[DescriptorNumber, Parameter, numbers.Number]) -> Parameter:
    if isinstance(other, numbers.Number):
        new_full_value = self.full_value * other
        if other == 0:
            descriptor_number = DescriptorNumber.from_scipp(name=self.name, full_value=new_full_value)
            descriptor_number.name = descriptor_number.unique_name
            return descriptor_number
        combinations = [self.min * other, self.max * other]
    elif isinstance(other, DescriptorNumber):  # Parameter inherits from DescriptorNumber and is also handled here
        new_full_value = self.full_value * other.full_value
        if (
            other.value == 0 and type(other) is DescriptorNumber
        ):  # Only return DescriptorNumber if other is strictly 0, i.e. not a parameter  # noqa: E501
            descriptor_number = DescriptorNumber.from_scipp(name=self.name, full_value=new_full_value)
            descriptor_number.name = descriptor_number.unique_name
            return descriptor_number
        if isinstance(other, Parameter):
            combinations = []
            for first, second in [
                (self.min, other.min),
                (self.min, other.max),
                (self.max, other.min),
                (self.max, other.max),
            ]:  # noqa: E501
                if first == 0 and np.isinf(second):
                    combinations.append(0)
                elif second == 0 and np.isinf(first):
                    combinations.append(0)
                else:
                    combinations.append(first * second)
        else:
            combinations = [self.min * other.value, self.max * other.value]
    else:
        return NotImplemented
    min_value = min(combinations)
    max_value = max(combinations)
    parameter = Parameter.from_scipp(name=self.name, full_value=new_full_value, min=min_value, max=max_value)
    parameter._convert_unit(parameter._base_unit())
    parameter.name = parameter.unique_name
    return parameter

__rmul__

__rmul__(other)
Source code in src/easyscience/variable/parameter.py
def __rmul__(self, other: Union[DescriptorNumber, numbers.Number]) -> Parameter:
    if isinstance(other, numbers.Number):
        new_full_value = other * self.full_value
        if other == 0:
            descriptor_number = DescriptorNumber.from_scipp(name=self.name, full_value=new_full_value)
            descriptor_number.name = descriptor_number.unique_name
            return descriptor_number
        combinations = [other * self.min, other * self.max]
    elif isinstance(other, DescriptorNumber):  # Parameter inherits from DescriptorNumber and is also handled here
        new_full_value = other.full_value * self.full_value
        if other.value == 0:
            descriptor_number = DescriptorNumber.from_scipp(name=self.name, full_value=new_full_value)
            descriptor_number.name = descriptor_number.unique_name
            return descriptor_number
        combinations = [self.min * other.value, self.max * other.value]
    else:
        return NotImplemented
    min_value = min(combinations)
    max_value = max(combinations)
    parameter = Parameter.from_scipp(name=self.name, full_value=new_full_value, min=min_value, max=max_value)
    parameter._convert_unit(parameter._base_unit())
    parameter.name = parameter.unique_name
    return parameter

__truediv__

__truediv__(other)
Source code in src/easyscience/variable/parameter.py
def __truediv__(self, other: Union[DescriptorNumber, Parameter, numbers.Number]) -> Parameter:
    if isinstance(other, numbers.Number):
        if other == 0:
            raise ZeroDivisionError('Cannot divide by zero')
        new_full_value = self.full_value / other
        combinations = [self.min / other, self.max / other]
    elif isinstance(other, DescriptorNumber):  # Parameter inherits from DescriptorNumber and is also handled here
        other_value = other.value
        if other_value == 0:
            raise ZeroDivisionError('Cannot divide by zero')
        new_full_value = self.full_value / other.full_value
        if isinstance(other, Parameter):
            if other.min < 0 and other.max > 0:
                combinations = [-np.inf, np.inf]
            elif other.min == 0:
                if self.min < 0 and self.max > 0:
                    combinations = [-np.inf, np.inf]
                elif self.min >= 0:
                    combinations = [self.min / other.max, np.inf]
                elif self.max <= 0:
                    combinations = [-np.inf, self.max / other.max]
            elif other.max == 0:
                if self.min < 0 and self.max > 0:
                    combinations = [-np.inf, np.inf]
                elif self.min >= 0:
                    combinations = [-np.inf, self.min / other.min]
                elif self.max <= 0:
                    combinations = [self.max / other.min, np.inf]
            else:
                combinations = [self.min / other.min, self.max / other.max, self.min / other.max, self.max / other.min]
        else:
            combinations = [self.min / other.value, self.max / other.value]
        other.value = other_value
    else:
        return NotImplemented
    min_value = min(combinations)
    max_value = max(combinations)
    parameter = Parameter.from_scipp(name=self.name, full_value=new_full_value, min=min_value, max=max_value)
    parameter._convert_unit(parameter._base_unit())
    parameter.name = parameter.unique_name
    return parameter

__rtruediv__

__rtruediv__(other)
Source code in src/easyscience/variable/parameter.py
def __rtruediv__(self, other: Union[DescriptorNumber, numbers.Number]) -> Parameter:
    original_self = self.value
    if original_self == 0:
        raise ZeroDivisionError('Cannot divide by zero')
    if isinstance(other, numbers.Number):
        new_full_value = other / self.full_value
        other_value = other
        if other_value == 0:
            descriptor_number = DescriptorNumber.from_scipp(name=self.name, full_value=new_full_value)
            descriptor_number.name = descriptor_number.unique_name
            return descriptor_number
    elif isinstance(other, DescriptorNumber):  # Parameter inherits from DescriptorNumber and is also handled here
        new_full_value = other.full_value / self.full_value
        other_value = other.value
        if other_value == 0:
            descriptor_number = DescriptorNumber.from_scipp(name=self.name, full_value=new_full_value)
            descriptor_number.name = descriptor_number.unique_name
            return descriptor_number
    else:
        return NotImplemented
    if self.min < 0 and self.max > 0:
        combinations = [-np.inf, np.inf]
    elif self.min == 0:
        if other_value > 0:
            combinations = [other_value / self.max, np.inf]
        elif other_value < 0:
            combinations = [-np.inf, other_value / self.max]
    elif self.max == 0:
        if other_value > 0:
            combinations = [-np.inf, other_value / self.min]
        elif other_value < 0:
            combinations = [other_value / self.min, np.inf]
    else:
        combinations = [other_value / self.min, other_value / self.max]
    min_value = min(combinations)
    max_value = max(combinations)
    parameter = Parameter.from_scipp(name=self.name, full_value=new_full_value, min=min_value, max=max_value)
    parameter._convert_unit(parameter._base_unit())
    parameter.name = parameter.unique_name
    self.value = original_self
    return parameter

__pow__

__pow__(other)
Source code in src/easyscience/variable/parameter.py
def __pow__(self, other: Union[DescriptorNumber, numbers.Number]) -> Parameter:
    if isinstance(other, numbers.Number):
        exponent = other
    elif type(other) is DescriptorNumber:  # Strictly a DescriptorNumber, We can't raise to the power of a Parameter
        if other.unit != 'dimensionless':
            raise UnitError('Exponents must be dimensionless')
        if other.variance is not None:
            raise ValueError('Exponents must not have variance')
        exponent = other.value
    else:
        return NotImplemented

    try:
        new_full_value = self.full_value**exponent
    except Exception as message:
        raise message from None

    if np.isnan(new_full_value.value):
        raise ValueError('The result of the exponentiation is not a number')
    if exponent == 0:
        descriptor_number = DescriptorNumber.from_scipp(name=self.name, full_value=new_full_value)
        descriptor_number.name = descriptor_number.unique_name
        return descriptor_number
    elif exponent < 0:
        if self.min < 0 and self.max > 0:
            combinations = [-np.inf, np.inf]
        elif self.min == 0:
            combinations = [self.max**exponent, np.inf]
        elif self.max == 0:
            combinations = [-np.inf, self.min**exponent]
        else:
            combinations = [self.min**exponent, self.max**exponent]
    else:
        combinations = [self.min**exponent, self.max**exponent]
    if exponent % 2 == 0:
        if self.min < 0 and self.max > 0:
            combinations.append(0)
        combinations = [abs(combination) for combination in combinations]
    elif exponent % 1 != 0:
        if self.min < 0:
            combinations.append(0)
        combinations = [combination for combination in combinations if combination >= 0]
    min_value = min(combinations)
    max_value = max(combinations)
    parameter = Parameter.from_scipp(name=self.name, full_value=new_full_value, min=min_value, max=max_value)
    parameter.name = parameter.unique_name
    return parameter
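
A small sketch (illustrative values) of the even-exponent bound handling in the code above: when the base's bounds straddle zero, zero is included and the bounds are made non-negative:

from easyscience.variable.parameter import Parameter

x = Parameter(name='x', value=2.0, min=-3.0, max=4.0)

y = x**2
print(y.value)       # 4.0
print(y.min, y.max)  # 0.0 16.0 -- zero is included because the bounds of x straddle zero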

__neg__

__neg__()
Source code in src/easyscience/variable/parameter.py
def __neg__(self) -> Parameter:
    new_full_value = -self.full_value
    min_value = -self.max
    max_value = -self.min
    parameter = Parameter.from_scipp(name=self.name, full_value=new_full_value, min=min_value, max=max_value)
    parameter.name = parameter.unique_name
    return parameter

__abs__

__abs__()
Source code in src/easyscience/variable/parameter.py
def __abs__(self) -> Parameter:
    new_full_value = abs(self.full_value)
    combinations = [abs(self.min), abs(self.max)]
    if self.min < 0 and self.max > 0:
        combinations.append(0.0)
    min_value = min(combinations)
    max_value = max(combinations)
    parameter = Parameter.from_scipp(name=self.name, full_value=new_full_value, min=min_value, max=max_value)
    parameter.name = parameter.unique_name
    return parameter

resolve_pending_dependencies

resolve_pending_dependencies()

Resolve pending dependencies after deserialization.

This method should be called after all parameters have been deserialized to establish dependency relationships using serializer_ids.

Source code in src/easyscience/variable/parameter.py
def resolve_pending_dependencies(self) -> None:
    """Resolve pending dependencies after deserialization.

    This method should be called after all parameters have been deserialized
    to establish dependency relationships using serializer_ids.
    """
    if hasattr(self, '_pending_dependency_string'):
        dependency_string = self._pending_dependency_string
        dependency_map = {}

        if hasattr(self, '_pending_dependency_map_serializer_ids'):
            dependency_map_serializer_ids = self._pending_dependency_map_serializer_ids

            # Build dependency_map by looking up objects by serializer_id
            for key, serializer_id in dependency_map_serializer_ids.items():
                dep_obj = self._find_parameter_by_serializer_id(serializer_id)
                if dep_obj is not None:
                    dependency_map[key] = dep_obj
                else:
                    raise ValueError(f"Cannot find parameter with serializer_id '{serializer_id}'")

        # Establish the dependency relationship
        try:
            self.make_dependent_on(dependency_expression=dependency_string, dependency_map=dependency_map)
        except Exception as e:
            raise ValueError(f"Error establishing dependency '{dependency_string}': {e}")

        # Clean up temporary attributes
        delattr(self, '_pending_dependency_string')
        delattr(self, '_pending_dependency_map_serializer_ids')
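
A hedged sketch of the intended serialization round trip for a dependent parameter (illustrative names). The restore steps are shown as comments because they assume deserialization happens in a fresh session, or after clearing the global map, so that unique names do not collide:

from easyscience.variable.parameter import Parameter

a = Parameter(name='a', value=1.0)
b = Parameter.from_dependency(name='b', dependency_expression='2 * a', dependency_map={'a': a})

a_dict = a.as_dict()
b_dict = b.as_dict()  # includes '_dependency_string' and the serializer ids of its dependencies

# Intended restore order (in a fresh session):
#   a_new = Parameter.from_dict(a_dict)
#   b_new = Parameter.from_dict(b_dict)      # loaded as independent for now
#   b_new.resolve_pending_dependencies()     # re-links b_new to a_new via serializer_id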

_find_parameter_by_serializer_id

_find_parameter_by_serializer_id(serializer_id)

Find a parameter by its serializer_id from all parameters in the global map.

Source code in src/easyscience/variable/parameter.py
def _find_parameter_by_serializer_id(self, serializer_id: str) -> Optional['DescriptorNumber']:
    """Find a parameter by its serializer_id from all parameters in the global map."""
    for obj in self._global_object.map._store.values():
        if isinstance(obj, DescriptorNumber) and hasattr(obj, '_DescriptorNumber__serializer_id'):
            if obj._DescriptorNumber__serializer_id == serializer_id:
                return obj
    return None

The Parameter class extends DescriptorNumber with fitting capabilities, bounds, and dependency relationships.

Base Classes for Models

BasedBase

easyscience.base_classes.BasedBase

Bases: SerializerComponent

Source code in src/easyscience/base_classes/based_base.py
class BasedBase(SerializerComponent):
    __slots__ = ['_name', '_global_object', 'user_data', '_kwargs']

    _REDIRECT = {}

    def __init__(self, name: str, interface: Optional[InterfaceFactoryTemplate] = None, unique_name: Optional[str] = None):
        self._global_object = global_object
        if unique_name is None:
            unique_name = self._global_object.generate_unique_name(self.__class__.__name__)
        self._unique_name = unique_name
        self._name = name
        self._global_object.map.add_vertex(self, obj_type='created')
        self.interface = interface
        self.user_data: dict = {}

    @property
    def _arg_spec(self) -> Set[str]:
        base_cls = getattr(self, '__old_class__', self.__class__)
        sign = signature(base_cls.__init__)
        names = [param.name for param in sign.parameters.values() if param.kind == param.POSITIONAL_OR_KEYWORD]
        return set(names[1:])

    def __reduce__(self):
        """
        Make the class picklable.
        Due to the nature of the dynamic class definitions special measures need to be taken.

        :return: Tuple consisting of how to make the object
        :rtype: tuple
        """
        state = self.encode()
        cls = getattr(self, '__old_class__', self.__class__)
        return cls.from_dict, (state,)

    @property
    def unique_name(self) -> str:
        """Get the unique name of the object."""
        return self._unique_name

    @unique_name.setter
    def unique_name(self, new_unique_name: str):
        """Set a new unique name for the object. The old name is still kept in the map.

        :param new_unique_name: New unique name for the object"""
        if not isinstance(new_unique_name, str):
            raise TypeError('Unique name has to be a string.')
        self._unique_name = new_unique_name
        self._global_object.map.add_vertex(self)

    @property
    def name(self) -> str:
        """
        Get the common name of the object.

        :return: Common name of the object
        """
        return self._name

    @name.setter
    def name(self, new_name: str):
        """
        Set a new common name for the object.

        :param new_name: New name for the object
        :return: None
        """
        self._name = new_name

    @property
    def interface(self) -> InterfaceFactoryTemplate:
        """
        Get the current interface of the object
        """
        return self._interface

    @interface.setter
    def interface(self, new_interface: InterfaceFactoryTemplate):
        """
        Set the current interface to the object and generate bindings if possible. iF.e.
        ```
        def __init__(self, bar, interface=None, **kwargs):
            super().__init__(self, **kwargs)
            self.foo = bar
            self.interface = interface # As final step after initialization to set correct bindings.
        ```
        """
        self._interface = new_interface
        if new_interface is not None:
            self.generate_bindings()

    def generate_bindings(self):
        """
        Generate or re-generate bindings to an interface (if exists)

        :raises: AttributeError
        """
        if self.interface is None:
            raise AttributeError('Interface error for generating bindings. `interface` has to be set.')
        interfaceable_children = [
            key
            for key in self._global_object.map.get_edges(self)
            if issubclass(type(self._global_object.map.get_item_by_key(key)), BasedBase)
        ]
        for child_key in interfaceable_children:
            child = self._global_object.map.get_item_by_key(child_key)
            child.interface = self.interface
        self.interface.generate_bindings(self)

    def switch_interface(self, new_interface_name: str):
        """
        Switch or create a new interface.
        """
        if self.interface is None:
            raise AttributeError('Interface error for generating bindings. `interface` has to be set.')
        self.interface.switch(new_interface_name)
        self.generate_bindings()

    def get_parameters(self) -> List[Parameter]:
        """
        Get all parameter objects as a list.

        :return: List of `Parameter` objects.
        """
        par_list = []
        for key, item in self._kwargs.items():
            if hasattr(item, 'get_parameters'):
                par_list = [*par_list, *item.get_parameters()]
            elif isinstance(item, Parameter):
                par_list.append(item)
        return par_list

    def _get_linkable_attributes(self) -> List[DescriptorBase]:
        """
        Get all objects which can be linked against as a list.

        :return: List of `Descriptor`/`Parameter` objects.
        """
        item_list = []
        for key, item in self._kwargs.items():
            if hasattr(item, '_get_linkable_attributes'):
                item_list = [*item_list, *item._get_linkable_attributes()]
            elif issubclass(type(item), (DescriptorBase)):
                item_list.append(item)
        return item_list

    def get_fit_parameters(self) -> List[Parameter]:
        """
        Get all objects which can be fitted (and are not fixed) as a list.

        :return: List of `Parameter` objects which can be used in fitting.
        """
        fit_list = []
        for key, item in self._kwargs.items():
            if hasattr(item, 'get_fit_parameters'):
                fit_list = [*fit_list, *item.get_fit_parameters()]
            elif isinstance(item, Parameter):
                if item.independent and not item.fixed:
                    fit_list.append(item)
        return fit_list

    def __dir__(self) -> Iterable[str]:
        """
        This creates auto-completion and helps out in iPython notebooks.

        :return: list of function and parameter names for auto-completion
        """
        new_class_objs = list(k for k in dir(self.__class__) if not k.startswith('_'))
        return sorted(new_class_objs)

    def __copy__(self) -> BasedBase:
        """Return a copy of the object."""
        temp = self.as_dict(skip=['unique_name'])
        new_obj = self.__class__.from_dict(temp)
        return new_obj

    def as_dict(self, skip: Optional[List[str]] = None) -> Dict[str, Any]:
        """
        Convert an object into a full dictionary using `SerializerDict`.
        This is a shortcut for ```obj.encode(encoder=SerializerDict)```

        :param skip: List of field names as strings to skip when forming the dictionary
        :return: encoded object containing all information to reform an EasyScience object.
        """
        # extend skip to include unique_name by default
        if skip is None:
            skip = []
        if 'unique_name' not in skip:
            skip.append('unique_name')
        return super().as_dict(skip=skip)

__slots__ class-attribute instance-attribute

__slots__ = [
    '_name',
    '_global_object',
    'user_data',
    '_kwargs',
]

_REDIRECT class-attribute instance-attribute

_REDIRECT = {}

_global_object instance-attribute

_global_object = global_object

_unique_name instance-attribute

_unique_name = unique_name

_name instance-attribute

_name = name

interface property writable

interface

Get the current interface of the object

user_data instance-attribute

user_data = {}

_arg_spec property

_arg_spec

unique_name property writable

unique_name

Get the unique name of the object.

name property writable

name

Get the common name of the object.

:return: Common name of the object

__init__

__init__(name, interface=None, unique_name=None)
Source code in src/easyscience/base_classes/based_base.py
def __init__(self, name: str, interface: Optional[InterfaceFactoryTemplate] = None, unique_name: Optional[str] = None):
    self._global_object = global_object
    if unique_name is None:
        unique_name = self._global_object.generate_unique_name(self.__class__.__name__)
    self._unique_name = unique_name
    self._name = name
    self._global_object.map.add_vertex(self, obj_type='created')
    self.interface = interface
    self.user_data: dict = {}

__reduce__

__reduce__()

Make the class picklable. Due to the nature of the dynamic class definitions special measures need to be taken.

:return: Tuple consisting of how to make the object
:rtype: tuple

Source code in src/easyscience/base_classes/based_base.py
def __reduce__(self):
    """
    Make the class picklable.
    Due to the nature of the dynamic class definitions special measures need to be taken.

    :return: Tuple consisting of how to make the object
    :rtype: tuple
    """
    state = self.encode()
    cls = getattr(self, '__old_class__', self.__class__)
    return cls.from_dict, (state,)
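
Because __reduce__ routes pickling through encode() and from_dict, instances survive a pickle round trip even when their class was created dynamically. A small sketch with a hypothetical ObjBase subclass (not part of the library):

import pickle

from easyscience.base_classes import ObjBase
from easyscience.variable import Parameter

class MyModel(ObjBase):
    def __init__(self, amplitude: Parameter, name: str = 'my_model'):
        super().__init__(name, amplitude=amplitude)

model = MyModel(Parameter(name='amplitude', value=3.0))
clone = pickle.loads(pickle.dumps(model))   # rebuilt via MyModel.from_dict(state)
print(clone.amplitude.value)                # 3.0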

generate_bindings

generate_bindings()

Generate or re-generate bindings to an interface (if exists)

:raises: AttributeError

Source code in src/easyscience/base_classes/based_base.py
def generate_bindings(self):
    """
    Generate or re-generate bindings to an interface (if exists)

    :raises: AttributeError
    """
    if self.interface is None:
        raise AttributeError('Interface error for generating bindings. `interface` has to be set.')
    interfaceable_children = [
        key
        for key in self._global_object.map.get_edges(self)
        if issubclass(type(self._global_object.map.get_item_by_key(key)), BasedBase)
    ]
    for child_key in interfaceable_children:
        child = self._global_object.map.get_item_by_key(child_key)
        child.interface = self.interface
    self.interface.generate_bindings(self)

switch_interface

switch_interface(new_interface_name)

Switch or create a new interface.

Source code in src/easyscience/base_classes/based_base.py
def switch_interface(self, new_interface_name: str):
    """
    Switch or create a new interface.
    """
    if self.interface is None:
        raise AttributeError('Interface error for generating bindings. `interface` has to be set.')
    self.interface.switch(new_interface_name)
    self.generate_bindings()

get_parameters

get_parameters()

Get all parameter objects as a list.

:return: List of Parameter objects.

Source code in src/easyscience/base_classes/based_base.py
def get_parameters(self) -> List[Parameter]:
    """
    Get all parameter objects as a list.

    :return: List of `Parameter` objects.
    """
    par_list = []
    for key, item in self._kwargs.items():
        if hasattr(item, 'get_parameters'):
            par_list = [*par_list, *item.get_parameters()]
        elif isinstance(item, Parameter):
            par_list.append(item)
    return par_list

_get_linkable_attributes

_get_linkable_attributes()

Get all objects which can be linked against as a list.

:return: List of Descriptor/Parameter objects.

Source code in src/easyscience/base_classes/based_base.py
def _get_linkable_attributes(self) -> List[DescriptorBase]:
    """
    Get all objects which can be linked against as a list.

    :return: List of `Descriptor`/`Parameter` objects.
    """
    item_list = []
    for key, item in self._kwargs.items():
        if hasattr(item, '_get_linkable_attributes'):
            item_list = [*item_list, *item._get_linkable_attributes()]
        elif issubclass(type(item), (DescriptorBase)):
            item_list.append(item)
    return item_list

get_fit_parameters

get_fit_parameters()

Get all objects which can be fitted (and are not fixed) as a list.

:return: List of Parameter objects which can be used in fitting.

Source code in src/easyscience/base_classes/based_base.py
def get_fit_parameters(self) -> List[Parameter]:
    """
    Get all objects which can be fitted (and are not fixed) as a list.

    :return: List of `Parameter` objects which can be used in fitting.
    """
    fit_list = []
    for key, item in self._kwargs.items():
        if hasattr(item, 'get_fit_parameters'):
            fit_list = [*fit_list, *item.get_fit_parameters()]
        elif isinstance(item, Parameter):
            if item.independent and not item.fixed:
                fit_list.append(item)
    return fit_list
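
A short sketch of parameter collection, using a hypothetical Line model (not part of the library): get_parameters returns every Parameter in the object graph, while get_fit_parameters keeps only the independent, non-fixed ones. The fixed keyword on the Parameter constructor is assumed here.

from easyscience.base_classes import ObjBase
from easyscience.variable import Parameter

class Line(ObjBase):
    def __init__(self, slope: Parameter, intercept: Parameter, name: str = 'line'):
        super().__init__(name, slope=slope, intercept=intercept)

line = Line(
    slope=Parameter(name='slope', value=1.0),
    intercept=Parameter(name='intercept', value=0.0, fixed=True),
)

print([p.name for p in line.get_parameters()])      # ['slope', 'intercept']
print([p.name for p in line.get_fit_parameters()])  # ['slope'] -- 'intercept' is fixed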

__dir__

__dir__()

This creates auto-completion and helps out in IPython notebooks.

:return: list of function and parameter names for auto-completion

Source code in src/easyscience/base_classes/based_base.py
def __dir__(self) -> Iterable[str]:
    """
    This creates auto-completion and helps out in iPython notebooks.

    :return: list of function and parameter names for auto-completion
    """
    new_class_objs = list(k for k in dir(self.__class__) if not k.startswith('_'))
    return sorted(new_class_objs)

__copy__

__copy__()

Return a copy of the object.

Source code in src/easyscience/base_classes/based_base.py
def __copy__(self) -> BasedBase:
    """Return a copy of the object."""
    temp = self.as_dict(skip=['unique_name'])
    new_obj = self.__class__.from_dict(temp)
    return new_obj

as_dict

as_dict(skip=None)

Convert an object into a full dictionary using SerializerDict. This is a shortcut for obj.encode(encoder=SerializerDict)

:param skip: List of field names as strings to skip when forming the dictionary
:return: encoded object containing all information to reform an EasyScience object.

Source code in src/easyscience/base_classes/based_base.py
def as_dict(self, skip: Optional[List[str]] = None) -> Dict[str, Any]:
    """
    Convert an object into a full dictionary using `SerializerDict`.
    This is a shortcut for ```obj.encode(encoder=SerializerDict)```

    :param skip: List of field names as strings to skip when forming the dictionary
    :return: encoded object containing all information to reform an EasyScience object.
    """
    # extend skip to include unique_name by default
    if skip is None:
        skip = []
    if 'unique_name' not in skip:
        skip.append('unique_name')
    return super().as_dict(skip=skip)
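
A minimal round-trip sketch; the Toy class is hypothetical, and because unique_name is skipped by default the re-created object gets a fresh unique name rather than clashing with the original in the global map.

from easyscience.base_classes import ObjBase
from easyscience.variable import Parameter

class Toy(ObjBase):
    def __init__(self, scale: Parameter, name: str = 'toy'):
        super().__init__(name, scale=scale)

toy = Toy(Parameter(name='scale', value=4.2))
state = toy.as_dict()            # 'unique_name' is skipped by default
copy_of_toy = Toy.from_dict(state)
print(copy_of_toy.scale.value)   # 4.2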

Base class providing serialization, global object registration, and interface management.

ObjBase

easyscience.base_classes.ObjBase

Bases: BasedBase

This is the base class from which all higher-level classes are built. NOTE: This object is serializable only if parameters are supplied as: ObjBase(a=value, b=value). For Parameter or Descriptor objects we can cheat with ObjBase(*[Descriptor(...), Parameter(...), ...]).

Source code in src/easyscience/base_classes/obj_base.py
class ObjBase(BasedBase):
    """
    This is the base class for which all higher level classes are built off of.
    NOTE: This object is serializable only if parameters are supplied as:
    `ObjBase(a=value, b=value)`. For `Parameter` or `Descriptor` objects we can
    cheat with `ObjBase(*[Descriptor(...), Parameter(...), ...])`.
    """

    def __init__(
        self,
        name: str,
        unique_name: Optional[str] = None,
        *args: Optional[SerializerComponent],
        **kwargs: Optional[SerializerComponent],
    ):
        """
        Set up the base class.

        :param name: Name of this object
        :param args: Any arguments?
        :param kwargs: Fields which this class should contain
        """
        super(ObjBase, self).__init__(name=name, unique_name=unique_name)
        # If Parameter or Descriptor is given as arguments...
        for arg in args:
            if issubclass(type(arg), (ObjBase, DescriptorBase)):
                kwargs[getattr(arg, 'name')] = arg
        # Set kwargs, also useful for serialization
        known_keys = self.__dict__.keys()
        self._kwargs = kwargs
        for key in kwargs.keys():
            if key in known_keys:
                raise AttributeError('Kwargs cannot overwrite class attributes in ObjBase.')
            if issubclass(type(kwargs[key]), (BasedBase, DescriptorBase)) or 'CollectionBase' in [
                c.__name__ for c in type(kwargs[key]).__bases__
            ]:
                self._global_object.map.add_edge(self, kwargs[key])
                self._global_object.map.reset_type(kwargs[key], 'created_internal')
            addLoggedProp(
                self,
                key,
                self.__getter(key),
                self.__setter(key),
                get_id=key,
                my_self=self,
                test_class=ObjBase,
            )

    def _add_component(self, key: str, component: SerializerComponent) -> None:
        """
        Dynamically add a component to the class. This is an internal method, though can be called remotely.
        The recommended alternative is to use typing, i.e.

        .. code-block:: python

            class Foo(Bar):
                def __init__(self, foo: Parameter, bar: Parameter):
                    super(Foo, self).__init__(bar=bar)
                    self._add_component("foo", foo)

        :param key: Name of component to be added
        :param component: Component to be added
        :return: None
        """
        self._kwargs[key] = component
        self._global_object.map.add_edge(self, component)
        self._global_object.map.reset_type(component, 'created_internal')
        addLoggedProp(
            self,
            key,
            self.__getter(key),
            self.__setter(key),
            get_id=key,
            my_self=self,
            test_class=ObjBase,
        )

    def __setattr__(self, key: str, value: SerializerComponent) -> None:
        # Assume that the annotation is a ClassVar
        old_obj = None
        if (
            hasattr(self.__class__, '__annotations__')
            and key in self.__class__.__annotations__
            and hasattr(self.__class__.__annotations__[key], '__args__')
            and issubclass(
                getattr(value, '__old_class__', value.__class__),
                self.__class__.__annotations__[key].__args__,
            )
        ):
            if issubclass(type(getattr(self, key, None)), (BasedBase, DescriptorBase)):
                old_obj = self.__getattribute__(key)
                self._global_object.map.prune_vertex_from_edge(self, old_obj)
            self._add_component(key, value)
        else:
            if hasattr(self, key) and issubclass(type(value), (BasedBase, DescriptorBase)):
                old_obj = self.__getattribute__(key)
                self._global_object.map.prune_vertex_from_edge(self, old_obj)
                self._global_object.map.add_edge(self, value)
        super(ObjBase, self).__setattr__(key, value)
        # Update the interface bindings if something changed (BasedBase and Descriptor)
        if old_obj is not None:
            old_interface = getattr(self, 'interface', None)
            if old_interface is not None:
                self.generate_bindings()

    def __repr__(self) -> str:
        return f'{self.__class__.__name__} `{getattr(self, "name")}`'

    @staticmethod
    def __getter(key: str) -> Callable[[SerializerComponent], SerializerComponent]:
        def getter(obj: SerializerComponent) -> SerializerComponent:
            return obj._kwargs[key]

        return getter

    @staticmethod
    def __setter(key: str) -> Callable[[SerializerComponent], None]:
        def setter(obj: SerializerComponent, value: float) -> None:
            if issubclass(obj._kwargs[key].__class__, (DescriptorBase)) and not issubclass(value.__class__, (DescriptorBase)):
                obj._kwargs[key].value = value
            else:
                obj._kwargs[key] = value

        return setter

_kwargs instance-attribute

_kwargs = kwargs

__init__

__init__(name, unique_name=None, *args, **kwargs)

Set up the base class.

:param name: Name of this object
:param args: Positional `Parameter`/`Descriptor` objects, attached under their own names
:param kwargs: Fields which this class should contain

Source code in src/easyscience/base_classes/obj_base.py
def __init__(
    self,
    name: str,
    unique_name: Optional[str] = None,
    *args: Optional[SerializerComponent],
    **kwargs: Optional[SerializerComponent],
):
    """
    Set up the base class.

    :param name: Name of this object
    :param args: Any arguments?
    :param kwargs: Fields which this class should contain
    """
    super(ObjBase, self).__init__(name=name, unique_name=unique_name)
    # If Parameter or Descriptor is given as arguments...
    for arg in args:
        if issubclass(type(arg), (ObjBase, DescriptorBase)):
            kwargs[getattr(arg, 'name')] = arg
    # Set kwargs, also useful for serialization
    known_keys = self.__dict__.keys()
    self._kwargs = kwargs
    for key in kwargs.keys():
        if key in known_keys:
            raise AttributeError('Kwargs cannot overwrite class attributes in ObjBase.')
        if issubclass(type(kwargs[key]), (BasedBase, DescriptorBase)) or 'CollectionBase' in [
            c.__name__ for c in type(kwargs[key]).__bases__
        ]:
            self._global_object.map.add_edge(self, kwargs[key])
            self._global_object.map.reset_type(kwargs[key], 'created_internal')
        addLoggedProp(
            self,
            key,
            self.__getter(key),
            self.__setter(key),
            get_id=key,
            my_self=self,
            test_class=ObjBase,
        )

_add_component

_add_component(key, component)

Dynamically add a component to the class. This is an internal method, though can be called remotely. The recommended alternative is to use typing, i.e.

.. code-block:: python

class Foo(Bar):
    def __init__(self, foo: Parameter, bar: Parameter):
        super(Foo, self).__init__(bar=bar)
        self._add_component("foo", foo)

:param key: Name of component to be added
:param component: Component to be added
:return: None

Source code in src/easyscience/base_classes/obj_base.py
def _add_component(self, key: str, component: SerializerComponent) -> None:
    """
    Dynamically add a component to the class. This is an internal method, though can be called remotely.
    The recommended alternative is to use typing, i.e.

    .. code-block:: python

        class Foo(Bar):
            def __init__(self, foo: Parameter, bar: Parameter):
                super(Foo, self).__init__(bar=bar)
                self._add_component("foo", foo)

    :param key: Name of component to be added
    :param component: Component to be added
    :return: None
    """
    self._kwargs[key] = component
    self._global_object.map.add_edge(self, component)
    self._global_object.map.reset_type(component, 'created_internal')
    addLoggedProp(
        self,
        key,
        self.__getter(key),
        self.__setter(key),
        get_id=key,
        my_self=self,
        test_class=ObjBase,
    )

__setattr__

__setattr__(key, value)
Source code in src/easyscience/base_classes/obj_base.py
def __setattr__(self, key: str, value: SerializerComponent) -> None:
    # Assume that the annotation is a ClassVar
    old_obj = None
    if (
        hasattr(self.__class__, '__annotations__')
        and key in self.__class__.__annotations__
        and hasattr(self.__class__.__annotations__[key], '__args__')
        and issubclass(
            getattr(value, '__old_class__', value.__class__),
            self.__class__.__annotations__[key].__args__,
        )
    ):
        if issubclass(type(getattr(self, key, None)), (BasedBase, DescriptorBase)):
            old_obj = self.__getattribute__(key)
            self._global_object.map.prune_vertex_from_edge(self, old_obj)
        self._add_component(key, value)
    else:
        if hasattr(self, key) and issubclass(type(value), (BasedBase, DescriptorBase)):
            old_obj = self.__getattribute__(key)
            self._global_object.map.prune_vertex_from_edge(self, old_obj)
            self._global_object.map.add_edge(self, value)
    super(ObjBase, self).__setattr__(key, value)
    # Update the interface bindings if something changed (BasedBase and Descriptor)
    if old_obj is not None:
        old_interface = getattr(self, 'interface', None)
        if old_interface is not None:
            self.generate_bindings()

__repr__

__repr__()
Source code in src/easyscience/base_classes/obj_base.py
def __repr__(self) -> str:
    return f'{self.__class__.__name__} `{getattr(self, "name")}`'

__getter staticmethod

__getter(key)
Source code in src/easyscience/base_classes/obj_base.py
@staticmethod
def __getter(key: str) -> Callable[[SerializerComponent], SerializerComponent]:
    def getter(obj: SerializerComponent) -> SerializerComponent:
        return obj._kwargs[key]

    return getter

__setter staticmethod

__setter(key)
Source code in src/easyscience/base_classes/obj_base.py
@staticmethod
def __setter(key: str) -> Callable[[SerializerComponent], None]:
    def setter(obj: SerializerComponent, value: float) -> None:
        if issubclass(obj._kwargs[key].__class__, (DescriptorBase)) and not issubclass(value.__class__, (DescriptorBase)):
            obj._kwargs[key].value = value
        else:
            obj._kwargs[key] = value

    return setter

Container class for creating scientific models with parameters. All user-defined models should inherit from this class.
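
A hedged sketch of a user-defined model; the Gaussian class and its parameters are hypothetical, but the pattern follows the constructor shown above: Parameters passed as keyword arguments become logged properties and edges in the global object map.

from easyscience.base_classes import ObjBase
from easyscience.variable import Parameter

class Gaussian(ObjBase):
    def __init__(self, amplitude: Parameter, width: Parameter, name: str = 'gaussian'):
        super().__init__(name, amplitude=amplitude, width=width)

model = Gaussian(
    amplitude=Parameter(name='amplitude', value=10.0),
    width=Parameter(name='width', value=1.5),
)

print(repr(model))              # Gaussian `gaussian`
model.amplitude.value = 12.0    # parameters are reachable as attributes
print([p.name for p in model.get_parameters()])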

Collections

easyscience.base_classes.CollectionBase

Bases: BasedBase, MutableSequence

This is the base class from which all higher-level classes are built. NOTE: This object is serializable only if parameters are supplied as: ObjBase(a=value, b=value). For Parameter or Descriptor objects we can cheat with ObjBase(*[Descriptor(...), Parameter(...), ...]).

Source code in src/easyscience/base_classes/collection_base.py
class CollectionBase(BasedBase, MutableSequence):
    """
    This is the base class for which all higher level classes are built off of.
    NOTE: This object is serializable only if parameters are supplied as:
    `ObjBase(a=value, b=value)`. For `Parameter` or `Descriptor` objects we can
    cheat with `ObjBase(*[Descriptor(...), Parameter(...), ...])`.
    """

    def __init__(
        self,
        name: str,
        *args: Union[BasedBase, DescriptorBase],
        interface: Optional[InterfaceFactoryTemplate] = None,
        unique_name: Optional[str] = None,
        **kwargs,
    ):
        """
        Set up the base collection class.

        :param name: Name of this object
        :type name: str
        :param args: selection of
        :param _kwargs: Fields which this class should contain
        :type _kwargs: dict
        """
        BasedBase.__init__(self, name, unique_name=unique_name)
        kwargs = {key: kwargs[key] for key in kwargs.keys() if kwargs[key] is not None}
        _args = []
        for item in args:
            if not isinstance(item, list):
                _args.append(item)
            else:
                _args += item
        _kwargs = {}
        for key, item in kwargs.items():
            if isinstance(item, list) and len(item) > 0:
                _args += item
            else:
                _kwargs[key] = item
        kwargs = _kwargs
        for item in list(kwargs.values()) + _args:
            if not issubclass(type(item), (DescriptorBase, BasedBase)):
                raise AttributeError('A collection can only be formed from easyscience objects.')
        args = _args
        _kwargs = {}
        for key, item in kwargs.items():
            _kwargs[key] = item
        for arg in args:
            kwargs[arg.unique_name] = arg
            _kwargs[arg.unique_name] = arg

        # Set kwargs, also useful for serialization
        self._kwargs = NotarizedDict(**_kwargs)

        for key in kwargs.keys():
            if key in self.__dict__.keys() or key in self.__slots__:
                raise AttributeError(f'Given kwarg: `{key}`, is an internal attribute. Please rename.')
            if kwargs[key]:  # Might be None (empty tuple or list)
                self._global_object.map.add_edge(self, kwargs[key])
                self._global_object.map.reset_type(kwargs[key], 'created_internal')
                if interface is not None:
                    kwargs[key].interface = interface
            # TODO wrap getter and setter in Logger
        if interface is not None:
            self.interface = interface
        self._kwargs._stack_enabled = True

    def insert(self, index: int, value: Union[DescriptorBase, BasedBase]) -> None:
        """
        Insert an object into the collection at an index.

        :param index: Index for EasyScience object to be inserted.
        :type index: int
        :param value: Object to be inserted.
        :type value: Union[BasedBase, DescriptorBase]
        :return: None
        :rtype: None
        """
        t_ = type(value)
        if issubclass(t_, (BasedBase, DescriptorBase)):
            update_key = list(self._kwargs.keys())
            values = list(self._kwargs.values())
            # Update the internal dict
            new_key = value.unique_name
            update_key.insert(index, new_key)
            values.insert(index, value)
            self._kwargs.reorder(**{k: v for k, v in zip(update_key, values)})
            # ADD EDGE
            self._global_object.map.add_edge(self, value)
            self._global_object.map.reset_type(value, 'created_internal')
            value.interface = self.interface
        else:
            raise AttributeError('Only EasyScience objects can be put into an EasyScience group')

    def __getitem__(self, idx: Union[int, slice]) -> Union[DescriptorBase, BasedBase]:
        """
        Get an item in the collection based on its index.

        :param idx: index or slice of the collection.
        :type idx: Union[int, slice]
        :return: Object at index `idx`
        :rtype: Union[Parameter, Descriptor, ObjBase, 'CollectionBase']
        """
        if isinstance(idx, slice):
            start, stop, step = idx.indices(len(self))
            return self.__class__(getattr(self, 'name'), *[self[i] for i in range(start, stop, step)])
        if str(idx) in self._kwargs.keys():
            return self._kwargs[str(idx)]
        if isinstance(idx, str):
            idx = [index for index, item in enumerate(self) if item.name == idx]
            noi = len(idx)
            if noi == 0:
                raise IndexError('Given index does not exist')
            elif noi == 1:
                idx = idx[0]
            else:
                return self.__class__(getattr(self, 'name'), *[self[i] for i in idx])
        elif not isinstance(idx, int) or isinstance(idx, bool):
            if isinstance(idx, bool):
                raise TypeError('Boolean indexing is not supported at the moment')
            try:
                if idx > len(self):
                    raise IndexError(f'Given index {idx} is out of bounds')
            except TypeError:
                raise IndexError('Index must be of type `int`/`slice` or an item name (`str`)')
        keys = list(self._kwargs.keys())
        return self._kwargs[keys[idx]]

    def __setitem__(self, key: int, value: Union[BasedBase, DescriptorBase]) -> None:
        """
        Set an item via it's index.

        :param key: Index in self.
        :type key: int
        :param value: Value which index key should be set to.
        :type value: Any
        """
        if isinstance(value, Number):  # noqa: S3827
            item = self.__getitem__(key)
            item.value = value
        elif issubclass(type(value), (BasedBase, DescriptorBase)):
            update_key = list(self._kwargs.keys())
            values = list(self._kwargs.values())
            old_item = values[key]
            # Update the internal dict
            update_dict = {update_key[key]: value}
            self._kwargs.update(update_dict)
            # ADD EDGE
            self._global_object.map.add_edge(self, value)
            self._global_object.map.reset_type(value, 'created_internal')
            value.interface = self.interface
            # REMOVE EDGE
            self._global_object.map.prune_vertex_from_edge(self, old_item)
        else:
            raise NotImplementedError('At the moment only numerical values or EasyScience objects can be set.')

    def __delitem__(self, key: int) -> None:
        """
        Try to delete  an idem by key.

        :param key:
        :type key:
        :return:
        :rtype:
        """
        keys = list(self._kwargs.keys())
        item = self._kwargs[keys[key]]
        self._global_object.map.prune_vertex_from_edge(self, item)
        del self._kwargs[keys[key]]

    def __len__(self) -> int:
        """
        Get the number of items in this collection

        :return: Number of items in this collection.
        :rtype: int
        """
        return len(self._kwargs.keys())

    def _convert_to_dict(self, in_dict, encoder, skip: List[str] = [], **kwargs) -> dict:
        """
        Convert ones self into a serialized form.

        :return: dictionary of ones self
        :rtype: dict
        """
        d = {}
        if hasattr(self, '_modify_dict'):
            # any extra keys defined on the inheriting class
            d = self._modify_dict(skip=skip, **kwargs)
        in_dict['data'] = [encoder._convert_to_dict(item, skip=skip, **kwargs) for item in self]
        out_dict = {**in_dict, **d}
        return out_dict

    @property
    def data(self) -> Tuple:
        """
        The data function returns a tuple of the keyword arguments passed to the
        constructor. This is useful for when you need to pass in a dictionary of data
        to other functions, such as with matplotlib's plot function.

        :param self: Access attributes of the class within the method
        :return: The values of the attributes in a tuple
        :doc-author: Trelent
        """
        return tuple(self._kwargs.values())

    def __repr__(self) -> str:
        return f'{self.__class__.__name__} `{getattr(self, "name")}` of length {len(self)}'

    def sort(self, mapping: Callable[[Union[BasedBase, DescriptorBase]], Any], reverse: bool = False) -> None:
        """
        Sort the collection according to the given mapping.

        :param mapping: mapping function to sort the collection. i.e. lambda parameter: parameter.value
        :type mapping: Callable
        :param reverse: Reverse the sorting.
        :type reverse: bool
        """
        i = list(self._kwargs.items())
        i.sort(key=lambda x: mapping(x[1]), reverse=reverse)
        self._kwargs.reorder(**{k[0]: k[1] for k in i})

_kwargs instance-attribute

_kwargs = NotarizedDict(**_kwargs)

interface instance-attribute

interface = interface

data property

data

The data property returns a tuple of the objects passed to the constructor. This is useful when you need to pass the collection's contents on to other functions, such as matplotlib's plot function.

:return: The values of the attributes in a tuple

__init__

__init__(
    name, *args, interface=None, unique_name=None, **kwargs
)

Set up the base collection class.

:param name: Name of this object
:type name: str
:param args: Selection of EasyScience objects (`BasedBase`/`DescriptorBase`) to populate the collection
:param kwargs: Fields which this class should contain
:type kwargs: dict

Source code in src/easyscience/base_classes/collection_base.py
def __init__(
    self,
    name: str,
    *args: Union[BasedBase, DescriptorBase],
    interface: Optional[InterfaceFactoryTemplate] = None,
    unique_name: Optional[str] = None,
    **kwargs,
):
    """
    Set up the base collection class.

    :param name: Name of this object
    :type name: str
    :param args: selection of
    :param _kwargs: Fields which this class should contain
    :type _kwargs: dict
    """
    BasedBase.__init__(self, name, unique_name=unique_name)
    kwargs = {key: kwargs[key] for key in kwargs.keys() if kwargs[key] is not None}
    _args = []
    for item in args:
        if not isinstance(item, list):
            _args.append(item)
        else:
            _args += item
    _kwargs = {}
    for key, item in kwargs.items():
        if isinstance(item, list) and len(item) > 0:
            _args += item
        else:
            _kwargs[key] = item
    kwargs = _kwargs
    for item in list(kwargs.values()) + _args:
        if not issubclass(type(item), (DescriptorBase, BasedBase)):
            raise AttributeError('A collection can only be formed from easyscience objects.')
    args = _args
    _kwargs = {}
    for key, item in kwargs.items():
        _kwargs[key] = item
    for arg in args:
        kwargs[arg.unique_name] = arg
        _kwargs[arg.unique_name] = arg

    # Set kwargs, also useful for serialization
    self._kwargs = NotarizedDict(**_kwargs)

    for key in kwargs.keys():
        if key in self.__dict__.keys() or key in self.__slots__:
            raise AttributeError(f'Given kwarg: `{key}`, is an internal attribute. Please rename.')
        if kwargs[key]:  # Might be None (empty tuple or list)
            self._global_object.map.add_edge(self, kwargs[key])
            self._global_object.map.reset_type(kwargs[key], 'created_internal')
            if interface is not None:
                kwargs[key].interface = interface
        # TODO wrap getter and setter in Logger
    if interface is not None:
        self.interface = interface
    self._kwargs._stack_enabled = True

insert

insert(index, value)

Insert an object into the collection at an index.

:param index: Index for EasyScience object to be inserted.
:type index: int
:param value: Object to be inserted.
:type value: Union[BasedBase, DescriptorBase]
:return: None
:rtype: None

Source code in src/easyscience/base_classes/collection_base.py
def insert(self, index: int, value: Union[DescriptorBase, BasedBase]) -> None:
    """
    Insert an object into the collection at an index.

    :param index: Index for EasyScience object to be inserted.
    :type index: int
    :param value: Object to be inserted.
    :type value: Union[BasedBase, DescriptorBase]
    :return: None
    :rtype: None
    """
    t_ = type(value)
    if issubclass(t_, (BasedBase, DescriptorBase)):
        update_key = list(self._kwargs.keys())
        values = list(self._kwargs.values())
        # Update the internal dict
        new_key = value.unique_name
        update_key.insert(index, new_key)
        values.insert(index, value)
        self._kwargs.reorder(**{k: v for k, v in zip(update_key, values)})
        # ADD EDGE
        self._global_object.map.add_edge(self, value)
        self._global_object.map.reset_type(value, 'created_internal')
        value.interface = self.interface
    else:
        raise AttributeError('Only EasyScience objects can be put into an EasyScience group')

__getitem__

__getitem__(idx)

Get an item in the collection based on its index.

:param idx: index or slice of the collection.
:type idx: Union[int, slice]
:return: Object at index idx
:rtype: Union[Parameter, Descriptor, ObjBase, 'CollectionBase']

Source code in src/easyscience/base_classes/collection_base.py
def __getitem__(self, idx: Union[int, slice]) -> Union[DescriptorBase, BasedBase]:
    """
    Get an item in the collection based on its index.

    :param idx: index or slice of the collection.
    :type idx: Union[int, slice]
    :return: Object at index `idx`
    :rtype: Union[Parameter, Descriptor, ObjBase, 'CollectionBase']
    """
    if isinstance(idx, slice):
        start, stop, step = idx.indices(len(self))
        return self.__class__(getattr(self, 'name'), *[self[i] for i in range(start, stop, step)])
    if str(idx) in self._kwargs.keys():
        return self._kwargs[str(idx)]
    if isinstance(idx, str):
        idx = [index for index, item in enumerate(self) if item.name == idx]
        noi = len(idx)
        if noi == 0:
            raise IndexError('Given index does not exist')
        elif noi == 1:
            idx = idx[0]
        else:
            return self.__class__(getattr(self, 'name'), *[self[i] for i in idx])
    elif not isinstance(idx, int) or isinstance(idx, bool):
        if isinstance(idx, bool):
            raise TypeError('Boolean indexing is not supported at the moment')
        try:
            if idx > len(self):
                raise IndexError(f'Given index {idx} is out of bounds')
        except TypeError:
            raise IndexError('Index must be of type `int`/`slice` or an item name (`str`)')
    keys = list(self._kwargs.keys())
    return self._kwargs[keys[idx]]
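
A sketch of the indexing rules, assuming CollectionBase is instantiated directly (in practice you would normally subclass it): integers index by position, strings look items up by name, and slices return a new collection of the same class.

from easyscience.base_classes import CollectionBase
from easyscience.variable import Parameter

pars = CollectionBase(
    'pars',
    Parameter(name='a', value=1.0),
    Parameter(name='b', value=2.0),
    Parameter(name='c', value=3.0),
)

print(pars[0].name)     # 'a'  -- integer index
print(pars['b'].value)  # 2.0  -- lookup by item name
print(len(pars[0:2]))   # 2    -- slicing returns a new collection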

__setitem__

__setitem__(key, value)

Set an item via its index.

:param key: Index in self.
:type key: int
:param value: Value to which the item at index key should be set.
:type value: Any

Source code in src/easyscience/base_classes/collection_base.py
def __setitem__(self, key: int, value: Union[BasedBase, DescriptorBase]) -> None:
    """
    Set an item via it's index.

    :param key: Index in self.
    :type key: int
    :param value: Value which index key should be set to.
    :type value: Any
    """
    if isinstance(value, Number):  # noqa: S3827
        item = self.__getitem__(key)
        item.value = value
    elif issubclass(type(value), (BasedBase, DescriptorBase)):
        update_key = list(self._kwargs.keys())
        values = list(self._kwargs.values())
        old_item = values[key]
        # Update the internal dict
        update_dict = {update_key[key]: value}
        self._kwargs.update(update_dict)
        # ADD EDGE
        self._global_object.map.add_edge(self, value)
        self._global_object.map.reset_type(value, 'created_internal')
        value.interface = self.interface
        # REMOVE EDGE
        self._global_object.map.prune_vertex_from_edge(self, old_item)
    else:
        raise NotImplementedError('At the moment only numerical values or EasyScience objects can be set.')

__delitem__

__delitem__(key)

Delete an item by key.

:param key: Index of the item to delete.
:type key: int

Source code in src/easyscience/base_classes/collection_base.py
def __delitem__(self, key: int) -> None:
    """
    Try to delete  an idem by key.

    :param key:
    :type key:
    :return:
    :rtype:
    """
    keys = list(self._kwargs.keys())
    item = self._kwargs[keys[key]]
    self._global_object.map.prune_vertex_from_edge(self, item)
    del self._kwargs[keys[key]]

__len__

__len__()

Get the number of items in this collection

:return: Number of items in this collection.
:rtype: int

Source code in src/easyscience/base_classes/collection_base.py
def __len__(self) -> int:
    """
    Get the number of items in this collection

    :return: Number of items in this collection.
    :rtype: int
    """
    return len(self._kwargs.keys())

_convert_to_dict

_convert_to_dict(in_dict, encoder, skip=[], **kwargs)

Convert the object into a serialized form.

:return: dictionary representation of the object
:rtype: dict

Source code in src/easyscience/base_classes/collection_base.py
def _convert_to_dict(self, in_dict, encoder, skip: List[str] = [], **kwargs) -> dict:
    """
    Convert ones self into a serialized form.

    :return: dictionary of ones self
    :rtype: dict
    """
    d = {}
    if hasattr(self, '_modify_dict'):
        # any extra keys defined on the inheriting class
        d = self._modify_dict(skip=skip, **kwargs)
    in_dict['data'] = [encoder._convert_to_dict(item, skip=skip, **kwargs) for item in self]
    out_dict = {**in_dict, **d}
    return out_dict

__repr__

__repr__()
Source code in src/easyscience/base_classes/collection_base.py
def __repr__(self) -> str:
    return f'{self.__class__.__name__} `{getattr(self, "name")}` of length {len(self)}'

sort

sort(mapping, reverse=False)

Sort the collection according to the given mapping.

:param mapping: Mapping function used to sort the collection, e.g. lambda parameter: parameter.value
:type mapping: Callable
:param reverse: Reverse the sorting.
:type reverse: bool

Source code in src/easyscience/base_classes/collection_base.py
def sort(self, mapping: Callable[[Union[BasedBase, DescriptorBase]], Any], reverse: bool = False) -> None:
    """
    Sort the collection according to the given mapping.

    :param mapping: mapping function to sort the collection. i.e. lambda parameter: parameter.value
    :type mapping: Callable
    :param reverse: Reverse the sorting.
    :type reverse: bool
    """
    i = list(self._kwargs.items())
    i.sort(key=lambda x: mapping(x[1]), reverse=reverse)
    self._kwargs.reorder(**{k[0]: k[1] for k in i})

Mutable sequence container for scientific objects with automatic parameter tracking.
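
A hedged sketch of a user-defined collection; the ParameterCollection subclass is hypothetical, and append comes for free from the MutableSequence machinery (it delegates to insert).

from easyscience.base_classes import CollectionBase
from easyscience.variable import Parameter

class ParameterCollection(CollectionBase):
    """A thin, hypothetical wrapper around CollectionBase."""

pars = ParameterCollection(
    'pars',
    Parameter(name='c', value=3.0),
    Parameter(name='a', value=1.0),
    Parameter(name='b', value=2.0),
)

pars.sort(lambda p: p.value)                  # ascending by value
print([p.name for p in pars])                 # ['a', 'b', 'c']
pars.append(Parameter(name='d', value=0.5))   # MutableSequence API, routed through insert()
print(len(pars), pars.data[-1].name)          # 4 d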

Fitting and Optimization

Fitter

easyscience.fitting.Fitter

Fitter is a class which makes it possible to undertake fitting utilizing one of the supported minimizers.

Source code in src/easyscience/fitting/fitter.py
class Fitter:
    """
    Fitter is a class which makes it possible to undertake fitting utilizing one of the supported minimizers.
    """

    def __init__(self, fit_object, fit_function: Callable):
        self._fit_object = fit_object
        self._fit_function = fit_function
        self._dependent_dims: int = None
        self._tolerance: float = None
        self._max_evaluations: int = None

        self._minimizer: MinimizerBase = None  # set in _update_minimizer
        self._enum_current_minimizer: AvailableMinimizers = None  # set in _update_minimizer
        self._update_minimizer(DEFAULT_MINIMIZER)

    def make_model(self, pars=None) -> Callable:
        return self._minimizer.make_model(pars)

    def evaluate(self, pars=None) -> np.ndarray:
        return self._minimizer.evaluate(pars)

    def convert_to_pars_obj(self, pars) -> object:
        return self._minimizer.convert_to_pars_obj(pars)

    # TODO: remove this method when we are ready to adjust the dependent products
    def initialize(self, fit_object, fit_function: Callable) -> None:
        """
        Set the model and callable in the calculator interface.

        :param fit_object: The EasyScience model object
        :param fit_function: The function to be optimized against.
        """
        self._fit_object = fit_object
        self._fit_function = fit_function
        self._update_minimizer(DEFAULT_MINIMIZER)

    # TODO: remove this method when we are ready to adjust the dependent products
    def create(self, minimizer_enum: Union[AvailableMinimizers, str] = DEFAULT_MINIMIZER) -> None:
        """
        Create the required minimizer.
        :param minimizer_enum: The enum of the minimization engine to create.
        """
        if isinstance(minimizer_enum, str):
            print(f'minimizer should be set with enum {minimizer_enum}')
            minimizer_enum = from_string_to_enum(minimizer_enum)
        self._update_minimizer(minimizer_enum)

    def switch_minimizer(self, minimizer_enum: Union[AvailableMinimizers, str]) -> None:
        """
        Switch minimizer and initialize.

        :param minimizer_enum: The enum of the minimizer to create and instantiate.
        """
        if isinstance(minimizer_enum, str):
            print(f'minimizer should be set with enum {minimizer_enum}')
            minimizer_enum = from_string_to_enum(minimizer_enum)

        self._update_minimizer(minimizer_enum)

    def _update_minimizer(self, minimizer_enum: AvailableMinimizers) -> None:
        self._minimizer = factory(minimizer_enum=minimizer_enum, fit_object=self._fit_object, fit_function=self.fit_function)
        self._enum_current_minimizer = minimizer_enum

    @property
    def available_minimizers(self) -> List[str]:
        """
        Get a list of the names of available fitting minimizers

        :return: List of available fitting minimizers
        :rtype: List[str]
        """
        return [minimize.name for minimize in AvailableMinimizers]

    @property
    def minimizer(self) -> MinimizerBase:
        """
        Get the current fitting minimizer object.

        :return:
        :rtype: MinimizerBase
        """
        return self._minimizer

    @property
    def tolerance(self) -> float:
        """
        Get the tolerance for the minimizer.

        :return: Tolerance for the minimizer
        """
        return self._tolerance

    @tolerance.setter
    def tolerance(self, tolerance: float) -> None:
        """
        Set the tolerance for the minimizer.

        :param tolerance: Tolerance for the minimizer
        """
        self._tolerance = tolerance

    @property
    def max_evaluations(self) -> int:
        """
        Get the maximal number of evaluations for the minimizer.

        :return: Maximal number of steps for the minimizer
        """
        return self._max_evaluations

    @max_evaluations.setter
    def max_evaluations(self, max_evaluations: int) -> None:
        """
        Set the maximal number of evaluations for the minimizer.

        :param max_evaluations: Maximal number of steps for the minimizer
        """
        self._max_evaluations = max_evaluations

    @property
    def fit_function(self) -> Callable:
        """
        The raw fit function that the optimizer will call (no wrapping)
        :return: Raw fit function
        """
        return self._fit_function

    @fit_function.setter
    def fit_function(self, fit_function: Callable) -> None:
        """
        Set the raw fit function to a new one.
        :param fit_function: New fit function
        :return: None
        """
        self._fit_function = fit_function
        self._update_minimizer(self._enum_current_minimizer)

    @property
    def fit_object(self):
        """
        The EasyScience object which will be used as a model
        :return: EasyScience Model
        """
        return self._fit_object

    @fit_object.setter
    def fit_object(self, fit_object) -> None:
        """
        Set the EasyScience object which wil be used as a model
        :param fit_object: New EasyScience object
        :return: None
        """
        self._fit_object = fit_object
        self._update_minimizer(self._enum_current_minimizer)

    def _fit_function_wrapper(self, real_x=None, flatten: bool = True) -> Callable:
        """
        Simple fit function which injects the real X (independent) values into the
        optimizer function. This will also flatten the results if needed.
        :param real_x: Independent x parameters to be injected
        :param flatten: Should the result be a flat 1D array?
        :return: Wrapped optimizer function.
        """
        fun = self._fit_function

        @functools.wraps(fun)
        def wrapped_fit_function(x, **kwargs):
            if real_x is not None:
                x = real_x
            dependent = fun(x, **kwargs)
            if flatten:
                dependent = dependent.flatten()
            return dependent

        return wrapped_fit_function

    @property
    def fit(self) -> Callable:
        """
        Property which wraps the current `fit` function from the fitting interface. This property return a wrapped fit
        function which converts the input data into the correct shape for the optimizer, wraps the fit function to
        re-constitute the independent variables and once the fit is completed, reshape the inputs to those expected.
        """

        @functools.wraps(self._minimizer.fit)
        def inner_fit_callable(
            x: np.ndarray,
            y: np.ndarray,
            weights: Optional[np.ndarray] = None,
            vectorized: bool = False,
            **kwargs,
        ) -> FitResults:
            """
            This is a wrapped callable which performs the actual fitting. It is split into
            3 sections, PRE/ FIT/ POST.
            - PRE = Reshaping the input data into the correct dimensions for the optimizer
            - FIT = Wrapping the fit function and performing the fit
            - POST = Reshaping the outputs so it is coherent with the inputs.
            """
            # Precompute - Reshape all independents into the correct dimensionality
            x_fit, x_new, y_new, weights, dims = self._precompute_reshaping(x, y, weights, vectorized)
            self._dependent_dims = dims

            # Fit
            fit_fun_org = self._fit_function
            fit_fun_wrap = self._fit_function_wrapper(x_new, flatten=True)  # This should be wrapped.
            self.fit_function = fit_fun_wrap
            f_res = self._minimizer.fit(
                x_fit,
                y_new,
                weights=weights,
                tolerance=self._tolerance,
                max_evaluations=self._max_evaluations,
                **kwargs,
            )

            # Postcompute
            fit_result = self._post_compute_reshaping(f_res, x, y)
            # Reset the function
            self.fit_function = fit_fun_org
            return fit_result

        return inner_fit_callable

    @staticmethod
    def _precompute_reshaping(
        x: np.ndarray,
        y: np.ndarray,
        weights: Optional[np.ndarray],
        vectorized: bool,
    ):
        """
        Check the dimensions of the inputs and reshape if necessary.
        :param x: ND matrix of dependent points
        :param y: N-1D matrix of independent points
        :param kwargs: Additional key-word arguments
        :return:
        """
        # Make sure that they are np arrays
        x_new = np.array(x)
        y_new = np.array(y)
        # Get the shape
        x_shape = x_new.shape
        # Check if the x data is 1D
        if len(x_shape) > 1:
            # It is ND data
            # Check if the data is vectorized. i.e. should x be [NxMx...x Ndims]
            if vectorized:
                # Assert that the shapes are the same
                if np.all(x_shape[:-1] != y_new.shape):
                    raise ValueError('The shape of the x and y data must be the same')
                # If so do nothing but note that the data is vectorized
                # x_shape = (-1,) # Should this be done?
            else:
                # Assert that the shapes are the same
                if np.prod(x_new.shape[:-1]) != y_new.size:
                    raise ValueError('The number of elements in x and y data must be the same')
                # Reshape the data to be [len(NxMx..), Ndims] i.e. flatten to columns
                x_new = x_new.reshape(-1, x_shape[-1], order='F')
        else:
            # Assert that the shapes are the same
            if np.all(x_shape != y_new.shape):
                raise ValueError('The shape of the x and y data must be the same')
            # It is 1D data
            x_new = x.flatten()
        # The optimizer needs a 1D array, flatten the y data
        y_new = y_new.flatten()
        if weights is not None:
            weights = np.array(weights).flatten()
        # Make a 'dummy' x array for the fit function
        x_for_fit = np.array(range(y_new.size))
        return x_for_fit, x_new, y_new, weights, x_shape

    @staticmethod
    def _post_compute_reshaping(fit_result: FitResults, x: np.ndarray, y: np.ndarray) -> FitResults:
        """
        Reshape the output of the fitter into the correct dimensions.
        :param fit_result: Output from the fitter
        :param x: Input x independent
        :param y: Input y dependent
        :return: Reshaped Fit Results
        """
        fit_result.x = x
        fit_result.y_obs = y
        fit_result.y_calc = np.reshape(fit_result.y_calc, y.shape)
        fit_result.y_err = np.reshape(fit_result.y_err, y.shape)
        return fit_result

_fit_object instance-attribute

_fit_object = fit_object

_fit_function instance-attribute

_fit_function = fit_function

_dependent_dims instance-attribute

_dependent_dims = None

_tolerance instance-attribute

_tolerance = None

_max_evaluations instance-attribute

_max_evaluations = None

_minimizer instance-attribute

_minimizer = None

_enum_current_minimizer instance-attribute

_enum_current_minimizer = None

available_minimizers property

available_minimizers

Get a list of the names of available fitting minimizers

:return: List of available fitting minimizers
:rtype: List[str]

minimizer property

minimizer

Get the current fitting minimizer object.

:return: The minimizer object currently in use
:rtype: MinimizerBase

tolerance property writable

tolerance

Get the tolerance for the minimizer.

:return: Tolerance for the minimizer

max_evaluations property writable

max_evaluations

Get the maximal number of evaluations for the minimizer.

:return: Maximal number of steps for the minimizer

fit_function property writable

fit_function

The raw fit function that the optimizer will call (no wrapping)

:return: Raw fit function

fit_object property writable

fit_object

The EasyScience object which will be used as a model

:return: EasyScience Model

fit property

fit

Property which wraps the current fit function from the fitting interface. It returns a wrapped fit function which converts the input data into the correct shape for the optimizer, wraps the fit function to re-constitute the independent variables and, once the fit is complete, reshapes the results back to the original data dimensions.
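
A minimal usage sketch of the wrapped fit call, assuming a Fitter has already been constructed from an EasyScience model object and its fit function (fitter, the model and the data below are all illustrative):

import numpy as np

x = np.linspace(0, 10, 101)                       # independent points
y = np.sin(x)                                     # measured points (synthetic here)

# Explicit unit weights keep the backend weight checks happy
result = fitter.fit(x=x, y=y, weights=np.ones_like(y))
print(result.success, result.p)                   # unified FitResults container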

__init__

__init__(fit_object, fit_function)
Source code in src/easyscience/fitting/fitter.py
def __init__(self, fit_object, fit_function: Callable):
    self._fit_object = fit_object
    self._fit_function = fit_function
    self._dependent_dims: int = None
    self._tolerance: float = None
    self._max_evaluations: int = None

    self._minimizer: MinimizerBase = None  # set in _update_minimizer
    self._enum_current_minimizer: AvailableMinimizers = None  # set in _update_minimizer
    self._update_minimizer(DEFAULT_MINIMIZER)

make_model

make_model(pars=None)
Source code in src/easyscience/fitting/fitter.py
def make_model(self, pars=None) -> Callable:
    return self._minimizer.make_model(pars)

evaluate

evaluate(pars=None)
Source code in src/easyscience/fitting/fitter.py
def evaluate(self, pars=None) -> np.ndarray:
    return self._minimizer.evaluate(pars)

convert_to_pars_obj

convert_to_pars_obj(pars)
Source code in src/easyscience/fitting/fitter.py
def convert_to_pars_obj(self, pars) -> object:
    return self._minimizer.convert_to_pars_obj(pars)

initialize

initialize(fit_object, fit_function)

Set the model and callable in the calculator interface.

:param fit_object: The EasyScience model object
:param fit_function: The function to be optimized against.

Source code in src/easyscience/fitting/fitter.py
def initialize(self, fit_object, fit_function: Callable) -> None:
    """
    Set the model and callable in the calculator interface.

    :param fit_object: The EasyScience model object
    :param fit_function: The function to be optimized against.
    """
    self._fit_object = fit_object
    self._fit_function = fit_function
    self._update_minimizer(DEFAULT_MINIMIZER)

create

create(minimizer_enum=DEFAULT_MINIMIZER)

Create the required minimizer.

:param minimizer_enum: The enum of the minimization engine to create.

Source code in src/easyscience/fitting/fitter.py
def create(self, minimizer_enum: Union[AvailableMinimizers, str] = DEFAULT_MINIMIZER) -> None:
    """
    Create the required minimizer.
    :param minimizer_enum: The enum of the minimization engine to create.
    """
    if isinstance(minimizer_enum, str):
        print(f'minimizer should be set with enum {minimizer_enum}')
        minimizer_enum = from_string_to_enum(minimizer_enum)
    self._update_minimizer(minimizer_enum)

switch_minimizer

switch_minimizer(minimizer_enum)

Switch minimizer and initialize.

:param minimizer_enum: The enum of the minimizer to create and instantiate.

Source code in src/easyscience/fitting/fitter.py
def switch_minimizer(self, minimizer_enum: Union[AvailableMinimizers, str]) -> None:
    """
    Switch minimizer and initialize.

    :param minimizer_enum: The enum of the minimizer to create and instantiate.
    """
    if isinstance(minimizer_enum, str):
        print(f'minimizer should be set with enum {minimizer_enum}')
        minimizer_enum = from_string_to_enum(minimizer_enum)

    self._update_minimizer(minimizer_enum)
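
Both create and switch_minimizer accept either an AvailableMinimizers member or its string name; a string is converted internally with from_string_to_enum (and a reminder is printed). A short sketch, assuming fitter is an existing Fitter and that the string form matches the enum member name:

from easyscience.fitting import AvailableMinimizers

fitter.switch_minimizer(AvailableMinimizers.LMFit)   # preferred: pass the enum member
fitter.switch_minimizer('LMFit')                     # also accepted; converted internally
print(fitter.minimizer.name)                         # -> 'LMFit'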

_update_minimizer

_update_minimizer(minimizer_enum)
Source code in src/easyscience/fitting/fitter.py
def _update_minimizer(self, minimizer_enum: AvailableMinimizers) -> None:
    self._minimizer = factory(minimizer_enum=minimizer_enum, fit_object=self._fit_object, fit_function=self.fit_function)
    self._enum_current_minimizer = minimizer_enum

_fit_function_wrapper

_fit_function_wrapper(real_x=None, flatten=True)

Build a simple wrapper which injects the real X (independent) values into the optimizer function and, if needed, flattens the results.

:param real_x: Independent x parameters to be injected
:param flatten: Should the result be a flat 1D array?
:return: Wrapped optimizer function.

Source code in src/easyscience/fitting/fitter.py
def _fit_function_wrapper(self, real_x=None, flatten: bool = True) -> Callable:
    """
    Simple fit function which injects the real X (independent) values into the
    optimizer function. This will also flatten the results if needed.
    :param real_x: Independent x parameters to be injected
    :param flatten: Should the result be a flat 1D array?
    :return: Wrapped optimizer function.
    """
    fun = self._fit_function

    @functools.wraps(fun)
    def wrapped_fit_function(x, **kwargs):
        if real_x is not None:
            x = real_x
        dependent = fun(x, **kwargs)
        if flatten:
            dependent = dependent.flatten()
        return dependent

    return wrapped_fit_function
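
A standalone sketch of what the wrapper does; my_fit_function and real_x below are illustrative stand-ins for the user-supplied fit function and the pre-computed independent values:

import numpy as np

def my_fit_function(x):                    # hypothetical fit function with 2D output
    return np.vstack([x, 2.0 * x])

real_x = np.array([1.0, 2.0, 3.0])

def wrapped_fit_function(x, **kwargs):     # mirrors the wrapper above
    if real_x is not None:
        x = real_x                         # inject the real independent values
    dependent = my_fit_function(x, **kwargs)
    return dependent.flatten()             # flatten so the optimizer sees a 1D array

dummy_x = np.arange(6)                     # the 'dummy' x built by _precompute_reshaping
print(wrapped_fit_function(dummy_x))       # -> [1. 2. 3. 2. 4. 6.]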

_precompute_reshaping staticmethod

_precompute_reshaping(x, y, weights, vectorized)

Check the dimensions of the inputs and reshape if necessary.

:param x: ND matrix of independent points
:param y: N-1D matrix of dependent points
:param weights: Optional weights for the measured points
:param vectorized: Whether the x data is supplied in vectorized form, i.e. shaped [N x M x ... x Ndims]
:return: Dummy x for the optimizer, reshaped x, flattened y, flattened weights and the original x shape

Source code in src/easyscience/fitting/fitter.py
@staticmethod
def _precompute_reshaping(
    x: np.ndarray,
    y: np.ndarray,
    weights: Optional[np.ndarray],
    vectorized: bool,
):
    """
    Check the dimensions of the inputs and reshape if necessary.
    :param x: ND matrix of dependent points
    :param y: N-1D matrix of independent points
    :param kwargs: Additional key-word arguments
    :return:
    """
    # Make sure that they are np arrays
    x_new = np.array(x)
    y_new = np.array(y)
    # Get the shape
    x_shape = x_new.shape
    # Check if the x data is 1D
    if len(x_shape) > 1:
        # It is ND data
        # Check if the data is vectorized. i.e. should x be [NxMx...x Ndims]
        if vectorized:
            # Assert that the shapes are the same
            if np.all(x_shape[:-1] != y_new.shape):
                raise ValueError('The shape of the x and y data must be the same')
            # If so do nothing but note that the data is vectorized
            # x_shape = (-1,) # Should this be done?
        else:
            # Assert that the shapes are the same
            if np.prod(x_new.shape[:-1]) != y_new.size:
                raise ValueError('The number of elements in x and y data must be the same')
            # Reshape the data to be [len(NxMx..), Ndims] i.e. flatten to columns
            x_new = x_new.reshape(-1, x_shape[-1], order='F')
    else:
        # Assert that the shapes are the same
        if np.all(x_shape != y_new.shape):
            raise ValueError('The shape of the x and y data must be the same')
        # It is 1D data
        x_new = x.flatten()
    # The optimizer needs a 1D array, flatten the y data
    y_new = y_new.flatten()
    if weights is not None:
        weights = np.array(weights).flatten()
    # Make a 'dummy' x array for the fit function
    x_for_fit = np.array(range(y_new.size))
    return x_for_fit, x_new, y_new, weights, x_shape
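
A hedged sketch of the non-vectorized ND case, assuming the class documented here is importable as easyscience.fitting.Fitter (the coordinates below are illustrative):

import numpy as np
from easyscience.fitting import Fitter

# Four measured points, each with 2D coordinates stored column-wise in x
x = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0, 4.0])

x_for_fit, x_new, y_new, weights, x_shape = Fitter._precompute_reshaping(x, y, None, vectorized=False)
print(x_for_fit.shape, x_new.shape, y_new.shape, x_shape)   # (4,) (4, 2) (4,) (4, 2)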

_post_compute_reshaping staticmethod

_post_compute_reshaping(fit_result, x, y)

Reshape the output of the fitter into the correct dimensions.

:param fit_result: Output from the fitter
:param x: Input independent values (x)
:param y: Input dependent values (y)
:return: Reshaped fit results

Source code in src/easyscience/fitting/fitter.py
@staticmethod
def _post_compute_reshaping(fit_result: FitResults, x: np.ndarray, y: np.ndarray) -> FitResults:
    """
    Reshape the output of the fitter into the correct dimensions.
    :param fit_result: Output from the fitter
    :param x: Input x independent
    :param y: Input y dependent
    :return: Reshaped Fit Results
    """
    fit_result.x = x
    fit_result.y_obs = y
    fit_result.y_calc = np.reshape(fit_result.y_calc, y.shape)
    fit_result.y_err = np.reshape(fit_result.y_err, y.shape)
    return fit_result

Main fitting engine supporting multiple optimization backends.

Available Minimizers

easyscience.fitting.AvailableMinimizers dataclass

Bases: AvailableMinimizer, Enum

Source code in src/easyscience/fitting/available_minimizers.py
class AvailableMinimizers(AvailableMinimizer, Enum):
    if lmfit_engine_available:
        LMFit = 'lm', 'leastsq', 11
        LMFit_leastsq = 'lm', 'leastsq', 12
        LMFit_powell = 'lm', 'powell', 13
        LMFit_cobyla = 'lm', 'cobyla', 14
        LMFit_differential_evolution = 'lm', 'differential_evolution', 15
        LMFit_scipy_least_squares = 'lm', 'least_squares', 16

    if bumps_engine_available:
        Bumps = 'bumps', 'amoeba', 21
        Bumps_simplex = 'bumps', 'amoeba', 22
        Bumps_newton = 'bumps', 'newton', 23
        Bumps_lm = 'bumps', 'lm', 24

    if dfo_engine_available:
        DFO = 'dfo', 'leastsq', 31
        DFO_leastsq = 'dfo', 'leastsq', 32

LMFit class-attribute instance-attribute

LMFit = ('lm', 'leastsq', 11)

LMFit_leastsq class-attribute instance-attribute

LMFit_leastsq = ('lm', 'leastsq', 12)

LMFit_powell class-attribute instance-attribute

LMFit_powell = ('lm', 'powell', 13)

LMFit_cobyla class-attribute instance-attribute

LMFit_cobyla = ('lm', 'cobyla', 14)

LMFit_differential_evolution class-attribute instance-attribute

LMFit_differential_evolution = (
    'lm',
    'differential_evolution',
    15,
)

LMFit_scipy_least_squares class-attribute instance-attribute

LMFit_scipy_least_squares = ('lm', 'least_squares', 16)

Bumps class-attribute instance-attribute

Bumps = ('bumps', 'amoeba', 21)

Bumps_simplex class-attribute instance-attribute

Bumps_simplex = ('bumps', 'amoeba', 22)

Bumps_newton class-attribute instance-attribute

Bumps_newton = ('bumps', 'newton', 23)

Bumps_lm class-attribute instance-attribute

Bumps_lm = ('bumps', 'lm', 24)

DFO class-attribute instance-attribute

DFO = ('dfo', 'leastsq', 31)

DFO_leastsq class-attribute instance-attribute

DFO_leastsq = ('dfo', 'leastsq', 32)

Enumeration of available optimization backends.
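
Which members exist depends on the optional fitting engines detected at import time ('lm', 'bumps', 'dfo'). A small sketch, assuming fitter is an existing Fitter instance:

from easyscience.fitting import AvailableMinimizers

print([m.name for m in AvailableMinimizers])   # members present for the installed engines
print(fitter.available_minimizers)             # names exposed by the Fitter instance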

Fit Results

easyscience.fitting.FitResults

At the moment this is just a dummy way of unifying the returned fit parameters.

Source code in src/easyscience/fitting/minimizers/utils.py
class FitResults:
    """
    At the moment this is just a dummy way of unifying the returned fit parameters.
    """

    __slots__ = [
        'success',
        'minimizer_engine',
        'fit_args',
        'p',
        'p0',
        'x',
        'x_matrices',
        'y_obs',
        'y_calc',
        'y_err',
        'engine_result',
        'total_results',
    ]

    def __init__(self):
        self.success = False
        self.minimizer_engine = None
        self.fit_args = {}
        self.p = {}
        self.p0 = {}
        self.x = np.ndarray([])
        self.x_matrices = np.ndarray([])
        self.y_obs = np.ndarray([])
        self.y_calc = np.ndarray([])
        self.y_err = np.ndarray([])
        self.engine_result = None
        self.total_results = None

    @property
    def n_pars(self):
        return len(self.p)

    @property
    def residual(self):
        return self.y_obs - self.y_calc

    @property
    def chi2(self):
        return ((self.residual / self.y_err) ** 2).sum()

    @property
    def reduced_chi(self):
        return self.chi2 / (len(self.x) - self.n_pars)

__slots__ class-attribute instance-attribute

__slots__ = [
    'success',
    'minimizer_engine',
    'fit_args',
    'p',
    'p0',
    'x',
    'x_matrices',
    'y_obs',
    'y_calc',
    'y_err',
    'engine_result',
    'total_results',
]

success instance-attribute

success = False

minimizer_engine instance-attribute

minimizer_engine = None

fit_args instance-attribute

fit_args = {}

p instance-attribute

p = {}

p0 instance-attribute

p0 = {}

x instance-attribute

x = ndarray([])

x_matrices instance-attribute

x_matrices = ndarray([])

y_obs instance-attribute

y_obs = ndarray([])

y_calc instance-attribute

y_calc = ndarray([])

y_err instance-attribute

y_err = ndarray([])

engine_result instance-attribute

engine_result = None

total_results instance-attribute

total_results = None

n_pars property

n_pars

residual property

residual

chi2 property

chi2

reduced_chi property

reduced_chi

__init__

__init__()
Source code in src/easyscience/fitting/minimizers/utils.py
def __init__(self):
    self.success = False
    self.minimizer_engine = None
    self.fit_args = {}
    self.p = {}
    self.p0 = {}
    self.x = np.ndarray([])
    self.x_matrices = np.ndarray([])
    self.y_obs = np.ndarray([])
    self.y_calc = np.ndarray([])
    self.y_err = np.ndarray([])
    self.engine_result = None
    self.total_results = None

Container for fitting results including parameters, statistics, and diagnostics.
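
A hedged sketch of inspecting a FitResults object returned by a fit (fitter, x and y are the illustrative names used in the earlier sketch):

import numpy as np

result = fitter.fit(x=x, y=y, weights=np.ones_like(y))
print(result.success)          # True if the engine reported success
print(result.p)                # fitted values keyed by (prefixed) parameter name
print(result.chi2)             # sum of ((y_obs - y_calc) / y_err) ** 2
print(result.reduced_chi)      # chi2 / (len(x) - n_pars)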

Minimizer Base Classes

easyscience.fitting.minimizers.MinimizerBase

This template class is the basis for all minimizer engines in EasyScience.

Source code in src/easyscience/fitting/minimizers/minimizer_base.py
class MinimizerBase(metaclass=ABCMeta):
    """
    This template class is the basis for all minimizer engines in `EasyScience`.
    """

    package: str = None

    def __init__(
        self,
        obj,  #: ObjBase,
        fit_function: Callable,
        minimizer_enum: AvailableMinimizers,
    ):  # todo after constraint changes, add type hint: obj: ObjBase  # noqa: E501
        if minimizer_enum.method not in self.supported_methods():
            raise FitError(f'Method {minimizer_enum.method} not available in {self.__class__}')
        self._object = obj
        self._original_fit_function = fit_function
        self._minimizer_enum = minimizer_enum
        self._method = minimizer_enum.method
        self._cached_pars: Dict[str, Parameter] = {}
        self._cached_pars_vals: Dict[str, Tuple[float]] = {}
        self._cached_model = None
        self._fit_function = None

    @property
    def enum(self) -> AvailableMinimizers:
        return self._minimizer_enum

    @property
    def name(self) -> str:
        return self._minimizer_enum.name

    @abstractmethod
    def fit(
        self,
        x: np.ndarray,
        y: np.ndarray,
        weights: np.ndarray,
        model: Optional[Callable] = None,
        parameters: Optional[Parameter] = None,
        method: Optional[str] = None,
        tolerance: Optional[float] = None,
        max_evaluations: Optional[int] = None,
        **kwargs,
    ) -> FitResults:
        """
        Perform a fit using the  engine.

        :param x: points to be calculated at
        :type x: np.ndarray
        :param y: measured points
        :type y: np.ndarray
        :param weights: Weights for supplied measured points
        :type weights: np.ndarray
        :param model: Optional Model which is being fitted to
        :param parameters: Optional parameters for the fit
        :param method: method for the minimizer to use.
        :type method: str
        :param kwargs: Additional arguments for the fitting function.
        :return: Fit results
        """

    def evaluate(self, x: np.ndarray, minimizer_parameters: Optional[dict[str, float]] = None, **kwargs) -> np.ndarray:
        """
        Evaluate the fit function for values of x. Parameters used are either the latest or user supplied.
        If the parameters are user supplied, it must be in a dictionary of {'parameter_name': parameter_value,...}.

        :param x: x values for which the fit function will be evaluated
        :type x:  np.ndarray
        :param minimizer_parameters: Dictionary of parameters which will be used in the fit function. They must be in a dictionary
         of {'parameter_name': parameter_value,...}
        :type minimizer_parameters: dict
        :param kwargs: additional arguments
        :return: y values calculated at points x for a set of parameters.
        :rtype: np.ndarray
        """  # noqa: E501
        if minimizer_parameters is None:
            minimizer_parameters = {}
        if not isinstance(minimizer_parameters, dict):
            raise TypeError('minimizer_parameters must be a dictionary')

        if self._fit_function is None:
            # This will also generate self._cached_pars
            self._fit_function = self._generate_fit_function()

        minimizer_parameters = self._prepare_parameters(minimizer_parameters)

        return self._fit_function(x, **minimizer_parameters, **kwargs)

    def _get_method_kwargs(self, passed_method: Optional[str] = None) -> dict[str, str]:
        if passed_method is not None:
            if passed_method not in self.supported_methods():
                raise FitError(f'Method {passed_method} not available in {self.__class__}')
            return {'method': passed_method}

        if self._method is not None:
            return {'method': self._method}

        return {}

    @abstractmethod
    def convert_to_pars_obj(self, par_list: Optional[Union[list]] = None):
        """
        Create an engine compatible container with the `Parameters` converted from the base object.

        :param par_list: If only a single/selection of parameter is required. Specify as a list
        :type par_list: List[str]
        :return: engine Parameters compatible object
        """

    @staticmethod
    @abstractmethod
    def supported_methods() -> List[str]:
        """
        Return a list of supported methods for the minimizer.

        :return: List of supported methods
        :rtype: List[str]
        """

    @staticmethod
    @abstractmethod
    def all_methods() -> List[str]:
        """
        Return a list of all available methods for the minimizer.

        :return: List of all available methods
        :rtype: List[str]
        """

    @staticmethod
    @abstractmethod
    def convert_to_par_object(obj):  # todo after constraint changes, add type hint: obj: ObjBase
        """
        Convert an `EasyScience.variable.Parameter` object to an engine Parameter object.
        """

    def _prepare_parameters(self, parameters: dict[str, float]) -> dict[str, float]:
        """
        Prepare the parameters for the minimizer.

        :param parameters: Dict of parameters for the minimizer with names as keys.
        """
        pars = self._cached_pars

        for name, item in pars.items():
            parameter_name = MINIMIZER_PARAMETER_PREFIX + str(name)
            if parameter_name not in parameters.keys():
                parameters[parameter_name] = item.value
        return parameters

    def _generate_fit_function(self) -> Callable:
        """
        Using the user supplied `fit_function`, wrap it in such a way we can update `Parameter` on
        iterations.

        :return: a fit function which is compatible with bumps models
        """
        # Original fit function
        func = self._original_fit_function
        # Get a list of `Parameters`
        self._cached_pars = {}
        self._cached_pars_vals = {}
        for parameter in self._object.get_fit_parameters():
            key = parameter.unique_name
            self._cached_pars[key] = parameter
            self._cached_pars_vals[key] = (parameter.value, parameter.error)

        # Make a new fit function
        def _fit_function(x: np.ndarray, **kwargs):
            """
            Wrapped fit function which now has an EasyScience compatible form

            :param x: array of data points to be calculated
            :type x: np.ndarray
            :param kwargs: key word arguments
            :return: points calculated at `x`
            :rtype: np.ndarray
            """
            # Update the `Parameter` values and the callback if needed
            # TODO THIS IS NOT THREAD SAFE :-(

            for name, value in kwargs.items():
                par_name = name[1:]
                if par_name in self._cached_pars.keys():
                    # This will take into account constraints
                    if self._cached_pars[par_name].value != value:
                        self._cached_pars[par_name].value = value

                    # Since we are calling the parameter fset will be called.
            # TODO Pre processing here
            return_data = func(x)
            # TODO Loading or manipulating data here
            return return_data

        _fit_function.__signature__ = self._create_signature(self._cached_pars)
        return _fit_function

    @staticmethod
    def _create_signature(parameters: Dict[int, Parameter]) -> Signature:
        """
        Wrap the function signature.
        This is done as lmfit wants the function to be in the form:
        f = (x, a=1, b=2)...
        Where we need to be generic. Note that this won't hold for much outside of this scope.
        """
        wrapped_parameters = []
        wrapped_parameters.append(InspectParameter('x', InspectParameter.POSITIONAL_OR_KEYWORD, annotation=_empty))

        for name, parameter in parameters.items():
            default_value = parameter.value

            wrapped_parameters.append(
                InspectParameter(
                    MINIMIZER_PARAMETER_PREFIX + str(name),
                    InspectParameter.POSITIONAL_OR_KEYWORD,
                    annotation=_empty,
                    default=default_value,
                )
            )
        return Signature(wrapped_parameters)

    @staticmethod
    def _error_from_jacobian(jacobian: np.ndarray, residuals: np.ndarray, confidence: float = 0.95) -> np.ndarray:
        from scipy import stats

        JtJi = np.linalg.inv(np.dot(jacobian.T, jacobian))
        # 1.96 is a 95% confidence value
        error_matrix = np.dot(
            JtJi,
            np.dot(jacobian.T, np.dot(np.diag(residuals**2), np.dot(jacobian, JtJi))),
        )

        z = 1 - ((1 - confidence) / 2)
        z = stats.norm.pdf(z)
        error_matrix = z * np.sqrt(error_matrix)
        return error_matrix

package class-attribute instance-attribute

package = None

_object instance-attribute

_object = obj

_original_fit_function instance-attribute

_original_fit_function = fit_function

_minimizer_enum instance-attribute

_minimizer_enum = minimizer_enum

_method instance-attribute

_method = minimizer_enum.method

_cached_pars instance-attribute

_cached_pars = {}

_cached_pars_vals instance-attribute

_cached_pars_vals = {}

_cached_model instance-attribute

_cached_model = None

_fit_function instance-attribute

_fit_function = None

enum property

enum

name property

name

__init__

__init__(obj, fit_function, minimizer_enum)
Source code in src/easyscience/fitting/minimizers/minimizer_base.py
def __init__(
    self,
    obj,  #: ObjBase,
    fit_function: Callable,
    minimizer_enum: AvailableMinimizers,
):  # todo after constraint changes, add type hint: obj: ObjBase  # noqa: E501
    if minimizer_enum.method not in self.supported_methods():
        raise FitError(f'Method {minimizer_enum.method} not available in {self.__class__}')
    self._object = obj
    self._original_fit_function = fit_function
    self._minimizer_enum = minimizer_enum
    self._method = minimizer_enum.method
    self._cached_pars: Dict[str, Parameter] = {}
    self._cached_pars_vals: Dict[str, Tuple[float]] = {}
    self._cached_model = None
    self._fit_function = None

fit abstractmethod

fit(
    x,
    y,
    weights,
    model=None,
    parameters=None,
    method=None,
    tolerance=None,
    max_evaluations=None,
    **kwargs,
)

Perform a fit using the engine.

:param x: points to be calculated at
:type x: np.ndarray
:param y: measured points
:type y: np.ndarray
:param weights: Weights for supplied measured points
:type weights: np.ndarray
:param model: Optional Model which is being fitted to
:param parameters: Optional parameters for the fit
:param method: method for the minimizer to use.
:type method: str
:param kwargs: Additional arguments for the fitting function.
:return: Fit results

Source code in src/easyscience/fitting/minimizers/minimizer_base.py
@abstractmethod
def fit(
    self,
    x: np.ndarray,
    y: np.ndarray,
    weights: np.ndarray,
    model: Optional[Callable] = None,
    parameters: Optional[Parameter] = None,
    method: Optional[str] = None,
    tolerance: Optional[float] = None,
    max_evaluations: Optional[int] = None,
    **kwargs,
) -> FitResults:
    """
    Perform a fit using the  engine.

    :param x: points to be calculated at
    :type x: np.ndarray
    :param y: measured points
    :type y: np.ndarray
    :param weights: Weights for supplied measured points
    :type weights: np.ndarray
    :param model: Optional Model which is being fitted to
    :param parameters: Optional parameters for the fit
    :param method: method for the minimizer to use.
    :type method: str
    :param kwargs: Additional arguments for the fitting function.
    :return: Fit results
    """

evaluate

evaluate(x, minimizer_parameters=None, **kwargs)

Evaluate the fit function for values of x. Parameters used are either the latest or user supplied. If parameters are user supplied, they must be in a dictionary of {'parameter_name': parameter_value, ...}.

:param x: x values for which the fit function will be evaluated
:type x: np.ndarray
:param minimizer_parameters: Dictionary of parameters which will be used in the fit function, in the form {'parameter_name': parameter_value, ...}
:type minimizer_parameters: dict
:param kwargs: additional arguments
:return: y values calculated at points x for a set of parameters.
:rtype: np.ndarray

Source code in src/easyscience/fitting/minimizers/minimizer_base.py
def evaluate(self, x: np.ndarray, minimizer_parameters: Optional[dict[str, float]] = None, **kwargs) -> np.ndarray:
    """
    Evaluate the fit function for values of x. Parameters used are either the latest or user supplied.
    If the parameters are user supplied, it must be in a dictionary of {'parameter_name': parameter_value,...}.

    :param x: x values for which the fit function will be evaluated
    :type x:  np.ndarray
    :param minimizer_parameters: Dictionary of parameters which will be used in the fit function. They must be in a dictionary
     of {'parameter_name': parameter_value,...}
    :type minimizer_parameters: dict
    :param kwargs: additional arguments
    :return: y values calculated at points x for a set of parameters.
    :rtype: np.ndarray
    """  # noqa: E501
    if minimizer_parameters is None:
        minimizer_parameters = {}
    if not isinstance(minimizer_parameters, dict):
        raise TypeError('minimizer_parameters must be a dictionary')

    if self._fit_function is None:
        # This will also generate self._cached_pars
        self._fit_function = self._generate_fit_function()

    minimizer_parameters = self._prepare_parameters(minimizer_parameters)

    return self._fit_function(x, **minimizer_parameters, **kwargs)
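
In practice evaluate is reached through the minimizer held by a Fitter; parameter overrides must use the prefixed names expected by the wrapped fit function. A hedged sketch (fitter and x assumed to exist; the prefixed parameter name is purely illustrative):

# Evaluate with the parameters' current values
y_model = fitter.minimizer.evaluate(x)

# Override selected parameters; keys carry the minimizer parameter prefix
y_trial = fitter.minimizer.evaluate(x, minimizer_parameters={'pamplitude_0': 2.5})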

_get_method_kwargs

_get_method_kwargs(passed_method=None)
Source code in src/easyscience/fitting/minimizers/minimizer_base.py
def _get_method_kwargs(self, passed_method: Optional[str] = None) -> dict[str, str]:
    if passed_method is not None:
        if passed_method not in self.supported_methods():
            raise FitError(f'Method {passed_method} not available in {self.__class__}')
        return {'method': passed_method}

    if self._method is not None:
        return {'method': self._method}

    return {}

convert_to_pars_obj abstractmethod

convert_to_pars_obj(par_list=None)

Create an engine compatible container with the Parameters converted from the base object.

:param par_list: If only a single parameter or a selection of parameters is required, specify them as a list
:type par_list: List[str]
:return: engine Parameters compatible object

Source code in src/easyscience/fitting/minimizers/minimizer_base.py
@abstractmethod
def convert_to_pars_obj(self, par_list: Optional[Union[list]] = None):
    """
    Create an engine compatible container with the `Parameters` converted from the base object.

    :param par_list: If only a single/selection of parameter is required. Specify as a list
    :type par_list: List[str]
    :return: engine Parameters compatible object
    """

supported_methods abstractmethod staticmethod

supported_methods()

Return a list of supported methods for the minimizer.

:return: List of supported methods
:rtype: List[str]

Source code in src/easyscience/fitting/minimizers/minimizer_base.py
@staticmethod
@abstractmethod
def supported_methods() -> List[str]:
    """
    Return a list of supported methods for the minimizer.

    :return: List of supported methods
    :rtype: List[str]
    """

all_methods abstractmethod staticmethod

all_methods()

Return a list of all available methods for the minimizer.

:return: List of all available methods
:rtype: List[str]

Source code in src/easyscience/fitting/minimizers/minimizer_base.py
@staticmethod
@abstractmethod
def all_methods() -> List[str]:
    """
    Return a list of all available methods for the minimizer.

    :return: List of all available methods
    :rtype: List[str]
    """

convert_to_par_object abstractmethod staticmethod

convert_to_par_object(obj)

Convert an EasyScience.variable.Parameter object to an engine Parameter object.

Source code in src/easyscience/fitting/minimizers/minimizer_base.py
@staticmethod
@abstractmethod
def convert_to_par_object(obj):  # todo after constraint changes, add type hint: obj: ObjBase
    """
    Convert an `EasyScience.variable.Parameter` object to an engine Parameter object.
    """

_prepare_parameters

_prepare_parameters(parameters)

Prepare the parameters for the minimizer.

:param parameters: Dict of parameters for the minimizer with names as keys.

Source code in src/easyscience/fitting/minimizers/minimizer_base.py
def _prepare_parameters(self, parameters: dict[str, float]) -> dict[str, float]:
    """
    Prepare the parameters for the minimizer.

    :param parameters: Dict of parameters for the minimizer with names as keys.
    """
    pars = self._cached_pars

    for name, item in pars.items():
        parameter_name = MINIMIZER_PARAMETER_PREFIX + str(name)
        if parameter_name not in parameters.keys():
            parameters[parameter_name] = item.value
    return parameters

_generate_fit_function

_generate_fit_function()

Using the user supplied fit_function, wrap it in such a way we can update Parameter on iterations.

:return: a fit function which is compatible with bumps models

Source code in src/easyscience/fitting/minimizers/minimizer_base.py
def _generate_fit_function(self) -> Callable:
    """
    Using the user supplied `fit_function`, wrap it in such a way we can update `Parameter` on
    iterations.

    :return: a fit function which is compatible with bumps models
    """
    # Original fit function
    func = self._original_fit_function
    # Get a list of `Parameters`
    self._cached_pars = {}
    self._cached_pars_vals = {}
    for parameter in self._object.get_fit_parameters():
        key = parameter.unique_name
        self._cached_pars[key] = parameter
        self._cached_pars_vals[key] = (parameter.value, parameter.error)

    # Make a new fit function
    def _fit_function(x: np.ndarray, **kwargs):
        """
        Wrapped fit function which now has an EasyScience compatible form

        :param x: array of data points to be calculated
        :type x: np.ndarray
        :param kwargs: key word arguments
        :return: points calculated at `x`
        :rtype: np.ndarray
        """
        # Update the `Parameter` values and the callback if needed
        # TODO THIS IS NOT THREAD SAFE :-(

        for name, value in kwargs.items():
            par_name = name[1:]
            if par_name in self._cached_pars.keys():
                # This will take into account constraints
                if self._cached_pars[par_name].value != value:
                    self._cached_pars[par_name].value = value

                # Since we are calling the parameter fset will be called.
        # TODO Pre processing here
        return_data = func(x)
        # TODO Loading or manipulating data here
        return return_data

    _fit_function.__signature__ = self._create_signature(self._cached_pars)
    return _fit_function

_create_signature staticmethod

_create_signature(parameters)

Wrap the function signature. This is done because lmfit wants the function to be of the form f = (x, a=1, b=2, ...), whereas we need to be generic. Note that this won't hold for much outside of this scope.

Source code in src/easyscience/fitting/minimizers/minimizer_base.py
@staticmethod
def _create_signature(parameters: Dict[int, Parameter]) -> Signature:
    """
    Wrap the function signature.
    This is done as lmfit wants the function to be in the form:
    f = (x, a=1, b=2)...
    Where we need to be generic. Note that this won't hold for much outside of this scope.
    """
    wrapped_parameters = []
    wrapped_parameters.append(InspectParameter('x', InspectParameter.POSITIONAL_OR_KEYWORD, annotation=_empty))

    for name, parameter in parameters.items():
        default_value = parameter.value

        wrapped_parameters.append(
            InspectParameter(
                MINIMIZER_PARAMETER_PREFIX + str(name),
                InspectParameter.POSITIONAL_OR_KEYWORD,
                annotation=_empty,
                default=default_value,
            )
        )
    return Signature(wrapped_parameters)
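
The effect can be reproduced in isolation with the standard inspect module; the prefixed parameter name and default below are illustrative:

from inspect import Parameter as InspectParameter, Signature

wrapped = [
    InspectParameter('x', InspectParameter.POSITIONAL_OR_KEYWORD),
    InspectParameter('pamplitude_0', InspectParameter.POSITIONAL_OR_KEYWORD, default=1.0),
]
print(Signature(wrapped))   # (x, pamplitude_0=1.0)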

_error_from_jacobian staticmethod

_error_from_jacobian(jacobian, residuals, confidence=0.95)
Source code in src/easyscience/fitting/minimizers/minimizer_base.py
@staticmethod
def _error_from_jacobian(jacobian: np.ndarray, residuals: np.ndarray, confidence: float = 0.95) -> np.ndarray:
    from scipy import stats

    JtJi = np.linalg.inv(np.dot(jacobian.T, jacobian))
    # 1.96 is a 95% confidence value
    error_matrix = np.dot(
        JtJi,
        np.dot(jacobian.T, np.dot(np.diag(residuals**2), np.dot(jacobian, JtJi))),
    )

    z = 1 - ((1 - confidence) / 2)
    z = stats.norm.pdf(z)
    error_matrix = z * np.sqrt(error_matrix)
    return error_matrix

Abstract base class for all minimizer implementations.
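
A minimal skeleton of a custom backend, showing which members a subclass must provide; wiring it into AvailableMinimizers and the minimizer factory is not shown, and the engine call itself is left as a placeholder:

from typing import List

from easyscience.fitting import FitResults
from easyscience.fitting.minimizers import MinimizerBase


class MyMinimizer(MinimizerBase):
    package = 'my_engine'   # hypothetical backend package name

    @staticmethod
    def supported_methods() -> List[str]:
        return ['leastsq']

    @staticmethod
    def all_methods() -> List[str]:
        return ['leastsq']

    @staticmethod
    def convert_to_par_object(obj):
        return obj   # map an EasyScience Parameter onto the engine's parameter type

    def convert_to_pars_obj(self, par_list=None):
        pars = par_list or self._object.get_fit_parameters()
        return [self.convert_to_par_object(p) for p in pars]

    def fit(self, x, y, weights, **kwargs) -> FitResults:
        results = FitResults()
        results.x, results.y_obs = x, y
        # ... call the actual engine here and fill p, y_calc, y_err, success ...
        return results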

easyscience.fitting.minimizers.LMFit

Bases: MinimizerBase

This is a wrapper to the extended Levenberg-Marquardt Fit: https://lmfit.github.io/lmfit-py/. It allows for the lmfit fitting engine to use parameters declared in an EasyScience.base_classes.ObjBase.

Source code in src/easyscience/fitting/minimizers/minimizer_lmfit.py
class LMFit(MinimizerBase):  # noqa: S101
    """
    This is a wrapper to the extended Levenberg-Marquardt Fit: https://lmfit.github.io/lmfit-py/
    It allows for the lmfit fitting engine to use parameters declared in an `EasyScience.base_classes.ObjBase`.
    """

    package = 'lmfit'

    def __init__(
        self,
        obj,  #: ObjBase,
        fit_function: Callable,
        minimizer_enum: Optional[AvailableMinimizers] = None,
    ):  # todo after constraint changes, add type hint: obj: ObjBase  # noqa: E501
        """
        Initialize the minimizer with the `ObjBase` and the `fit_function` to be used.

        :param obj: Base object which contains the parameters to be fitted
        :type obj: ObjBase
        :param fit_function: Function which will be fitted to the data
        :type fit_function: Callable
        :param method: Method to be used by the minimizer
        :type method: str
        """
        super().__init__(obj=obj, fit_function=fit_function, minimizer_enum=minimizer_enum)

    @staticmethod
    def all_methods() -> List[str]:
        return [
            'least_squares',
            'leastsq',
            'differential_evolution',
            'basinhopping',
            'ampgo',
            'nelder',
            'lbfgsb',
            'powell',
            'cg',
            'newton',
            'cobyla',
            'bfgs',
        ]

    @staticmethod
    def supported_methods() -> List[str]:
        return [
            'least_squares',
            'leastsq',
            'differential_evolution',
            'powell',
            'cobyla',
        ]

    def fit(
        self,
        x: np.ndarray,
        y: np.ndarray,
        weights: np.ndarray = None,
        model: Optional[LMModel] = None,
        parameters: Optional[LMParameters] = None,
        method: Optional[str] = None,
        tolerance: Optional[float] = None,
        max_evaluations: Optional[int] = None,
        minimizer_kwargs: Optional[dict] = None,
        engine_kwargs: Optional[dict] = None,
        **kwargs,
    ) -> FitResults:
        """
        Perform a fit using the lmfit engine.

        :param method:
        :type method:
        :param x: points to be calculated at
        :type x: np.ndarray
        :param y: measured points
        :type y: np.ndarray
        :param weights: Weights for supplied measured points
        :type weights: np.ndarray
        :param model: Optional Model which is being fitted to
        :type model: LMModel
        :param parameters: Optional parameters for the fit
        :type parameters: LMParameters
        :param minimizer_kwargs: Arguments to be passed directly to the minimizer
        :type minimizer_kwargs: dict
        :param kwargs: Additional arguments for the fitting function.
        :return: Fit results
        :rtype: ModelResult
        """
        x, y, weights = np.asarray(x), np.asarray(y), np.asarray(weights)

        if y.shape != x.shape:
            raise ValueError('x and y must have the same shape.')

        if weights.shape != x.shape:
            raise ValueError('Weights must have the same shape as x and y.')

        if not np.isfinite(weights).all():
            raise ValueError('Weights cannot be NaN or infinite.')

        if (weights <= 0).any():
            raise ValueError('Weights must be strictly positive and non-zero.')

        if engine_kwargs is None:
            engine_kwargs = {}

        method_kwargs = self._get_method_kwargs(method)
        fit_kws_dict = self._get_fit_kws(method, tolerance, minimizer_kwargs)

        # Why do we do this? Because a fitting template has to have global_object instantiated outside pre-runtime
        from easyscience import global_object

        stack_status = global_object.stack.enabled
        global_object.stack.enabled = False

        try:
            if model is None:
                model = self._make_model()

            model_results = model.fit(
                y,
                x=x,
                weights=weights,
                max_nfev=max_evaluations,
                fit_kws=fit_kws_dict,
                **method_kwargs,
                **engine_kwargs,
                **kwargs,
            )
            self._set_parameter_fit_result(model_results, stack_status)
            results = self._gen_fit_results(model_results)
        except Exception as e:
            for key in self._cached_pars.keys():
                self._cached_pars[key].value = self._cached_pars_vals[key][0]
            raise FitError(e)
        return results

    def _get_fit_kws(self, method: str, tolerance: float, minimizer_kwargs: dict[str:str]) -> dict[str:str]:
        if minimizer_kwargs is None:
            minimizer_kwargs = {}
        if tolerance is not None:
            if method in [None, 'least_squares', 'leastsq']:
                minimizer_kwargs['ftol'] = tolerance
            if method in ['differential_evolution', 'powell', 'cobyla']:
                minimizer_kwargs['tol'] = tolerance
        return minimizer_kwargs

    def convert_to_pars_obj(self, parameters: Optional[List[Parameter]] = None) -> LMParameters:
        """
        Create an lmfit compatible container with the `Parameters` converted from the base object.

        :param parameters: If only a single/selection of parameter is required. Specify as a list
        :return: lmfit Parameters compatible object
        """
        if parameters is None:
            # Assume that we have a ObjBase for which we can obtain a list
            parameters = self._object.get_fit_parameters()
        lm_parameters = LMParameters().add_many([self.convert_to_par_object(parameter) for parameter in parameters])
        return lm_parameters

    @staticmethod
    def convert_to_par_object(parameter: Parameter) -> LMParameter:
        """
        Convert an EasyScience Parameter object to a lmfit Parameter object.

        :return: lmfit Parameter compatible object.
        :rtype: LMParameter
        """
        value = parameter.value

        return LMParameter(
            MINIMIZER_PARAMETER_PREFIX + parameter.unique_name,
            value=value,
            vary=not parameter.fixed,
            min=parameter.min,
            max=parameter.max,
            expr=None,
            brute_step=None,
        )

    def _make_model(self, pars: Optional[LMParameters] = None) -> LMModel:
        """
        Generate a lmfit model from the supplied `fit_function` and parameters in the base object.

        :return: Callable lmfit model
        :rtype: LMModel
        """
        # Generate the fitting function
        fit_func = self._generate_fit_function()

        self._fit_function = fit_func

        if pars is None:
            pars = self._cached_pars
        # Create the model
        model = LMModel(
            fit_func,
            independent_vars=['x'],
            param_names=[MINIMIZER_PARAMETER_PREFIX + str(key) for key in pars.keys()],
        )
        # Assign values from the `Parameter` to the model
        for name, item in pars.items():
            if isinstance(item, LMParameter):
                value = item.value
            else:
                value = item.value

            model.set_param_hint(MINIMIZER_PARAMETER_PREFIX + str(name), value=value, min=item.min, max=item.max)

        # Cache the model for later reference
        self._cached_model = model
        return model

    def _set_parameter_fit_result(self, fit_result: ModelResult, stack_status: bool):
        """
        Update parameters to their final values and assign a std error to them.

        :param fit_result: Fit object which contains info on the fit
        :return: None
        :rtype: noneType
        """
        from easyscience import global_object

        pars = self._cached_pars
        if stack_status:
            for name in pars.keys():
                pars[name].value = self._cached_pars_vals[name][0]
                pars[name].error = self._cached_pars_vals[name][1]
            global_object.stack.enabled = True
            global_object.stack.beginMacro('Fitting routine')
        for name in pars.keys():
            pars[name].value = fit_result.params[MINIMIZER_PARAMETER_PREFIX + str(name)].value
            if fit_result.errorbars:
                pars[name].error = fit_result.params[MINIMIZER_PARAMETER_PREFIX + str(name)].stderr
            else:
                pars[name].error = 0.0
        if stack_status:
            global_object.stack.endMacro()

    def _gen_fit_results(self, fit_results: ModelResult, **kwargs) -> FitResults:
        """
        Convert fit results into the unified `FitResults` format.
        See https://github.com/lmfit/lmfit-py/blob/480072b9f7834b31ff2ca66277a5ad31246843a4/lmfit/model.py#L1272

        :param fit_result: Fit object which contains info on the fit
        :return: fit results container
        :rtype: FitResults
        """
        results = FitResults()
        for name, value in kwargs.items():
            if getattr(results, name, False):
                setattr(results, name, value)

        # We need to unify return codes......
        results.success = fit_results.success
        results.y_obs = fit_results.data
        # results.residual = fit_results.residual
        results.x = fit_results.userkws['x']
        results.p = fit_results.values
        results.p0 = fit_results.init_values
        # results.goodness_of_fit = fit_results.chisqr
        results.y_calc = fit_results.best_fit
        results.y_err = 1 / fit_results.weights
        results.minimizer_engine = self.__class__
        results.fit_args = None

        results.engine_result = fit_results
        # results.check_sanity()
        return results

package class-attribute instance-attribute

package = 'lmfit'

__init__

__init__(obj, fit_function, minimizer_enum=None)

Initialize the minimizer with the ObjBase and the fit_function to be used.

:param obj: Base object which contains the parameters to be fitted
:type obj: ObjBase
:param fit_function: Function which will be fitted to the data
:type fit_function: Callable
:param method: Method to be used by the minimizer
:type method: str

Source code in src/easyscience/fitting/minimizers/minimizer_lmfit.py
def __init__(
    self,
    obj,  #: ObjBase,
    fit_function: Callable,
    minimizer_enum: Optional[AvailableMinimizers] = None,
):  # todo after constraint changes, add type hint: obj: ObjBase  # noqa: E501
    """
    Initialize the minimizer with the `ObjBase` and the `fit_function` to be used.

    :param obj: Base object which contains the parameters to be fitted
    :type obj: ObjBase
    :param fit_function: Function which will be fitted to the data
    :type fit_function: Callable
    :param method: Method to be used by the minimizer
    :type method: str
    """
    super().__init__(obj=obj, fit_function=fit_function, minimizer_enum=minimizer_enum)

all_methods staticmethod

all_methods()
Source code in src/easyscience/fitting/minimizers/minimizer_lmfit.py
@staticmethod
def all_methods() -> List[str]:
    return [
        'least_squares',
        'leastsq',
        'differential_evolution',
        'basinhopping',
        'ampgo',
        'nelder',
        'lbfgsb',
        'powell',
        'cg',
        'newton',
        'cobyla',
        'bfgs',
    ]

supported_methods staticmethod

supported_methods()
Source code in src/easyscience/fitting/minimizers/minimizer_lmfit.py
@staticmethod
def supported_methods() -> List[str]:
    return [
        'least_squares',
        'leastsq',
        'differential_evolution',
        'powell',
        'cobyla',
    ]

fit

fit(
    x,
    y,
    weights=None,
    model=None,
    parameters=None,
    method=None,
    tolerance=None,
    max_evaluations=None,
    minimizer_kwargs=None,
    engine_kwargs=None,
    **kwargs,
)

Perform a fit using the lmfit engine.

:param x: points to be calculated at
:type x: np.ndarray
:param y: measured points
:type y: np.ndarray
:param weights: Weights for supplied measured points
:type weights: np.ndarray
:param model: Optional Model which is being fitted to
:type model: LMModel
:param parameters: Optional parameters for the fit
:type parameters: LMParameters
:param method: Method for the minimizer to use
:type method: str
:param minimizer_kwargs: Arguments to be passed directly to the minimizer
:type minimizer_kwargs: dict
:param kwargs: Additional arguments for the fitting function.
:return: Fit results
:rtype: FitResults

Source code in src/easyscience/fitting/minimizers/minimizer_lmfit.py
def fit(
    self,
    x: np.ndarray,
    y: np.ndarray,
    weights: np.ndarray = None,
    model: Optional[LMModel] = None,
    parameters: Optional[LMParameters] = None,
    method: Optional[str] = None,
    tolerance: Optional[float] = None,
    max_evaluations: Optional[int] = None,
    minimizer_kwargs: Optional[dict] = None,
    engine_kwargs: Optional[dict] = None,
    **kwargs,
) -> FitResults:
    """
    Perform a fit using the lmfit engine.

    :param method:
    :type method:
    :param x: points to be calculated at
    :type x: np.ndarray
    :param y: measured points
    :type y: np.ndarray
    :param weights: Weights for supplied measured points
    :type weights: np.ndarray
    :param model: Optional Model which is being fitted to
    :type model: LMModel
    :param parameters: Optional parameters for the fit
    :type parameters: LMParameters
    :param minimizer_kwargs: Arguments to be passed directly to the minimizer
    :type minimizer_kwargs: dict
    :param kwargs: Additional arguments for the fitting function.
    :return: Fit results
    :rtype: ModelResult
    """
    x, y, weights = np.asarray(x), np.asarray(y), np.asarray(weights)

    if y.shape != x.shape:
        raise ValueError('x and y must have the same shape.')

    if weights.shape != x.shape:
        raise ValueError('Weights must have the same shape as x and y.')

    if not np.isfinite(weights).all():
        raise ValueError('Weights cannot be NaN or infinite.')

    if (weights <= 0).any():
        raise ValueError('Weights must be strictly positive and non-zero.')

    if engine_kwargs is None:
        engine_kwargs = {}

    method_kwargs = self._get_method_kwargs(method)
    fit_kws_dict = self._get_fit_kws(method, tolerance, minimizer_kwargs)

    # Why do we do this? Because a fitting template has to have global_object instantiated outside pre-runtime
    from easyscience import global_object

    stack_status = global_object.stack.enabled
    global_object.stack.enabled = False

    try:
        if model is None:
            model = self._make_model()

        model_results = model.fit(
            y,
            x=x,
            weights=weights,
            max_nfev=max_evaluations,
            fit_kws=fit_kws_dict,
            **method_kwargs,
            **engine_kwargs,
            **kwargs,
        )
        self._set_parameter_fit_result(model_results, stack_status)
        results = self._gen_fit_results(model_results)
    except Exception as e:
        for key in self._cached_pars.keys():
            self._cached_pars[key].value = self._cached_pars_vals[key][0]
        raise FitError(e)
    return results
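
The weight checks above reject anything that is not finite, strictly positive and shaped like x and y. A small sketch of weights that pass, assuming Gaussian counting-statistics uncertainties (the data are illustrative):

import numpy as np

y_obs = np.array([4.0, 9.0, 16.0])   # hypothetical measured points
sigma = np.sqrt(y_obs)               # assumed uncertainties, all non-zero
weights = 1.0 / sigma                # y_err is later recovered as 1 / weights

assert weights.shape == y_obs.shape
assert np.isfinite(weights).all() and (weights > 0).all()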

_get_fit_kws

_get_fit_kws(method, tolerance, minimizer_kwargs)
Source code in src/easyscience/fitting/minimizers/minimizer_lmfit.py
def _get_fit_kws(self, method: str, tolerance: float, minimizer_kwargs: dict[str:str]) -> dict[str:str]:
    if minimizer_kwargs is None:
        minimizer_kwargs = {}
    if tolerance is not None:
        if method in [None, 'least_squares', 'leastsq']:
            minimizer_kwargs['ftol'] = tolerance
        if method in ['differential_evolution', 'powell', 'cobyla']:
            minimizer_kwargs['tol'] = tolerance
    return minimizer_kwargs

convert_to_pars_obj

convert_to_pars_obj(parameters=None)

Create an lmfit compatible container with the Parameters converted from the base object.

:param parameters: If only a single parameter or a selection of parameters is required, specify them as a list
:return: lmfit Parameters compatible object

Source code in src/easyscience/fitting/minimizers/minimizer_lmfit.py
def convert_to_pars_obj(self, parameters: Optional[List[Parameter]] = None) -> LMParameters:
    """
    Create an lmfit compatible container with the `Parameters` converted from the base object.

    :param parameters: If only a single/selection of parameter is required. Specify as a list
    :return: lmfit Parameters compatible object
    """
    if parameters is None:
        # Assume that we have a ObjBase for which we can obtain a list
        parameters = self._object.get_fit_parameters()
    lm_parameters = LMParameters().add_many([self.convert_to_par_object(parameter) for parameter in parameters])
    return lm_parameters

convert_to_par_object staticmethod

convert_to_par_object(parameter)

Convert an EasyScience Parameter object to a lmfit Parameter object.

:return: lmfit Parameter compatible object.
:rtype: LMParameter

Source code in src/easyscience/fitting/minimizers/minimizer_lmfit.py
@staticmethod
def convert_to_par_object(parameter: Parameter) -> LMParameter:
    """
    Convert an EasyScience Parameter object to a lmfit Parameter object.

    :return: lmfit Parameter compatible object.
    :rtype: LMParameter
    """
    value = parameter.value

    return LMParameter(
        MINIMIZER_PARAMETER_PREFIX + parameter.unique_name,
        value=value,
        vary=not parameter.fixed,
        min=parameter.min,
        max=parameter.max,
        expr=None,
        brute_step=None,
    )
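
The same mapping can be mimicked directly with lmfit; the prefixed name, value and bounds below are illustrative (the real prefix is MINIMIZER_PARAMETER_PREFIX and the unique name comes from the EasyScience Parameter):

from lmfit import Parameter as LMParameter

lm_par = LMParameter(
    'pamplitude_0',   # hypothetical prefixed unique name
    value=2.0,
    vary=True,        # vary = not parameter.fixed
    min=0.0,
    max=10.0,
)
print(lm_par)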

_make_model

_make_model(pars=None)

Generate a lmfit model from the supplied fit_function and parameters in the base object.

:return: Callable lmfit model :rtype: LMModel

Source code in src/easyscience/fitting/minimizers/minimizer_lmfit.py
def _make_model(self, pars: Optional[LMParameters] = None) -> LMModel:
    """
    Generate a lmfit model from the supplied `fit_function` and parameters in the base object.

    :return: Callable lmfit model
    :rtype: LMModel
    """
    # Generate the fitting function
    fit_func = self._generate_fit_function()

    self._fit_function = fit_func

    if pars is None:
        pars = self._cached_pars
    # Create the model
    model = LMModel(
        fit_func,
        independent_vars=['x'],
        param_names=[MINIMIZER_PARAMETER_PREFIX + str(key) for key in pars.keys()],
    )
    # Assign values from the `Parameter` to the model
    for name, item in pars.items():
        if isinstance(item, LMParameter):
            value = item.value
        else:
            value = item.value

        model.set_param_hint(MINIMIZER_PARAMETER_PREFIX + str(name), value=value, min=item.min, max=item.max)

    # Cache the model for later reference
    self._cached_model = model
    return model

_set_parameter_fit_result

_set_parameter_fit_result(fit_result, stack_status)

Update parameters to their final values and assign a std error to them.

:param fit_result: Fit object which contains info on the fit
:return: None
:rtype: noneType

Source code in src/easyscience/fitting/minimizers/minimizer_lmfit.py
def _set_parameter_fit_result(self, fit_result: ModelResult, stack_status: bool):
    """
    Update parameters to their final values and assign a std error to them.

    :param fit_result: Fit object which contains info on the fit
    :return: None
    :rtype: noneType
    """
    from easyscience import global_object

    pars = self._cached_pars
    if stack_status:
        for name in pars.keys():
            pars[name].value = self._cached_pars_vals[name][0]
            pars[name].error = self._cached_pars_vals[name][1]
        global_object.stack.enabled = True
        global_object.stack.beginMacro('Fitting routine')
    for name in pars.keys():
        pars[name].value = fit_result.params[MINIMIZER_PARAMETER_PREFIX + str(name)].value
        if fit_result.errorbars:
            pars[name].error = fit_result.params[MINIMIZER_PARAMETER_PREFIX + str(name)].stderr
        else:
            pars[name].error = 0.0
    if stack_status:
        global_object.stack.endMacro()

_gen_fit_results

_gen_fit_results(fit_results, **kwargs)

Convert fit results into the unified FitResults format. See https://github.com/lmfit/lmfit-py/blob/480072b9f7834b31ff2ca66277a5ad31246843a4/lmfit/model.py#L1272

:param fit_result: Fit object which contains info on the fit
:return: fit results container
:rtype: FitResults

Source code in src/easyscience/fitting/minimizers/minimizer_lmfit.py
def _gen_fit_results(self, fit_results: ModelResult, **kwargs) -> FitResults:
    """
    Convert fit results into the unified `FitResults` format.
    See https://github.com/lmfit/lmfit-py/blob/480072b9f7834b31ff2ca66277a5ad31246843a4/lmfit/model.py#L1272

    :param fit_result: Fit object which contains info on the fit
    :return: fit results container
    :rtype: FitResults
    """
    results = FitResults()
    for name, value in kwargs.items():
        if getattr(results, name, False):
            setattr(results, name, value)

    # We need to unify return codes......
    results.success = fit_results.success
    results.y_obs = fit_results.data
    # results.residual = fit_results.residual
    results.x = fit_results.userkws['x']
    results.p = fit_results.values
    results.p0 = fit_results.init_values
    # results.goodness_of_fit = fit_results.chisqr
    results.y_calc = fit_results.best_fit
    results.y_err = 1 / fit_results.weights
    results.minimizer_engine = self.__class__
    results.fit_args = None

    results.engine_result = fit_results
    # results.check_sanity()
    return results

LMFit-based minimizer implementation.

easyscience.fitting.minimizers.Bumps

Bases: MinimizerBase

This is a wrapper to Bumps: https://bumps.readthedocs.io/. It allows for the Bumps fitting engine to use parameters declared in an EasyScience.base_classes.ObjBase.

Source code in src/easyscience/fitting/minimizers/minimizer_bumps.py
class Bumps(MinimizerBase):
    """
    This is a wrapper to Bumps: https://bumps.readthedocs.io/
    It allows for the Bumps fitting engine to use parameters declared in an `EasyScience.base_classes.ObjBase`.
    """

    package = 'bumps'

    def __init__(
        self,
        obj,  #: ObjBase,
        fit_function: Callable,
        minimizer_enum: Optional[AvailableMinimizers] = None,
    ):  # todo after constraint changes, add type hint: obj: ObjBase  # noqa: E501
        """
        Initialize the fitting engine with a `ObjBase` and an arbitrary fitting function.

        :param obj: Object containing elements of the `Parameter` class
        :type obj: ObjBase
        :param fit_function: function that when called returns y values. 'x' must be the first
                            and only positional argument. Additional values can be supplied by
                            keyword/value pairs
        :type fit_function: Callable
        """
        super().__init__(obj=obj, fit_function=fit_function, minimizer_enum=minimizer_enum)
        self._p_0 = {}

    @staticmethod
    def all_methods() -> List[str]:
        return FIT_AVAILABLE_IDS_FILTERED

    @staticmethod
    def supported_methods() -> List[str]:
        # only a small subset
        methods = ['amoeba', 'newton', 'lm']
        return methods

    def fit(
        self,
        x: np.ndarray,
        y: np.ndarray,
        weights: np.ndarray,
        model: Optional[Callable] = None,
        parameters: Optional[Parameter] = None,
        method: Optional[str] = None,
        tolerance: Optional[float] = None,
        max_evaluations: Optional[int] = None,
        minimizer_kwargs: Optional[dict] = None,
        engine_kwargs: Optional[dict] = None,
        **kwargs,
    ) -> FitResults:
        """
        Perform a fit using the bumps engine.

        :param x: points to be calculated at
        :type x: np.ndarray
        :param y: measured points
        :type y: np.ndarray
        :param weights: Weights for supplied measured points
        :type weights: np.ndarray
        :param model: Optional Model which is being fitted to
        :type model: lmModel
        :param parameters: Optional parameters for the fit
        :type parameters: List[BumpsParameter]
        :param kwargs: Additional arguments for the fitting function.
        :param method: Method for minimization
        :type method: str
        :return: Fit results
        :rtype: ModelResult
        """
        method_dict = self._get_method_kwargs(method)

        x, y, weights = np.asarray(x), np.asarray(y), np.asarray(weights)

        if y.shape != x.shape:
            raise ValueError('x and y must have the same shape.')

        if weights.shape != x.shape:
            raise ValueError('Weights must have the same shape as x and y.')

        if not np.isfinite(weights).all():
            raise ValueError('Weights cannot be NaN or infinite.')

        if (weights <= 0).any():
            raise ValueError('Weights must be strictly positive and non-zero.')

        if engine_kwargs is None:
            engine_kwargs = {}

        if minimizer_kwargs is None:
            minimizer_kwargs = {}
        minimizer_kwargs.update(engine_kwargs)

        if tolerance is not None:
            minimizer_kwargs['ftol'] = tolerance  # tolerance for change in function value
            minimizer_kwargs['xtol'] = tolerance  # tolerance for change in parameter value, could be an independent value
        if max_evaluations is not None:
            minimizer_kwargs['steps'] = max_evaluations

        if model is None:
            model_function = self._make_model(parameters=parameters)
            model = model_function(x, y, weights)
        self._cached_model = model

        self._p_0 = {f'p{key}': self._cached_pars[key].value for key in self._cached_pars.keys()}

        problem = FitProblem(model)
        # Why do we do this? Because a fitting template has to have global_object instantiated outside pre-runtime
        from easyscience import global_object

        stack_status = global_object.stack.enabled
        global_object.stack.enabled = False

        try:
            model_results = bumps_fit(problem, **method_dict, **minimizer_kwargs, **kwargs)
            self._set_parameter_fit_result(model_results, stack_status, problem._parameters)
            results = self._gen_fit_results(model_results)
        except Exception as e:
            for key in self._cached_pars.keys():
                self._cached_pars[key].value = self._cached_pars_vals[key][0]
            raise FitError(e)
        return results

    def convert_to_pars_obj(self, par_list: Optional[List] = None) -> List[BumpsParameter]:
        """
        Create a container with the `Parameters` converted from the base object.

        :param par_list: If only a single/selection of parameter is required. Specify as a list
        :type par_list: List[str]
        :return: bumps Parameters list
        :rtype: List[BumpsParameter]
        """
        if par_list is None:
            # Assume that we have a ObjBase for which we can obtain a list
            par_list = self._object.get_fit_parameters()
        pars_obj = [self.__class__.convert_to_par_object(obj) for obj in par_list]
        return pars_obj

    # For some reason I have to double staticmethod :-/
    @staticmethod
    def convert_to_par_object(obj) -> BumpsParameter:
        """
        Convert an `EasyScience.variable.Parameter` object to a bumps Parameter object

        :return: bumps Parameter compatible object.
        :rtype: BumpsParameter
        """

        value = obj.value

        return BumpsParameter(
            name=MINIMIZER_PARAMETER_PREFIX + obj.unique_name,
            value=value,
            bounds=[obj.min, obj.max],
            fixed=obj.fixed,
        )

    def _make_model(self, parameters: Optional[List[BumpsParameter]] = None) -> Callable:
        """
        Generate a bumps model from the supplied `fit_function` and parameters in the base object.
        Note that this makes a callable as it needs to be initialized with *x*, *y*, *weights*

        :return: Callable to make a bumps Curve model
        :rtype: Callable
        """
        fit_func = self._generate_fit_function()

        def _outer(obj):
            def _make_func(x, y, weights):
                bumps_pars = {}
                if not parameters:
                    for name, par in obj._cached_pars.items():
                        bumps_pars[MINIMIZER_PARAMETER_PREFIX + str(name)] = obj.convert_to_par_object(par)
                else:
                    for par in parameters:
                        bumps_pars[MINIMIZER_PARAMETER_PREFIX + par.unique_name] = obj.convert_to_par_object(par)
                return Curve(fit_func, x, y, dy=weights, **bumps_pars)

            return _make_func

        return _outer(self)

    def _set_parameter_fit_result(self, fit_result, stack_status: bool, par_list: List[BumpsParameter]):
        """
        Update parameters to their final values and assign a std error to them.

        :param fit_result: Fit object which contains info on the fit
        :return: None
        :rtype: noneType
        """
        from easyscience import global_object

        pars = self._cached_pars

        if stack_status:
            for name in pars.keys():
                pars[name].value = self._cached_pars_vals[name][0]
                pars[name].error = self._cached_pars_vals[name][1]
            global_object.stack.enabled = True
            global_object.stack.beginMacro('Fitting routine')

        for index, name in enumerate([par.name for par in par_list]):
            dict_name = name[len(MINIMIZER_PARAMETER_PREFIX) :]
            pars[dict_name].value = fit_result.x[index]
            pars[dict_name].error = fit_result.dx[index]
        if stack_status:
            global_object.stack.endMacro()

    def _gen_fit_results(self, fit_results, **kwargs) -> FitResults:
        """
        Convert fit results into the unified `FitResults` format

        :param fit_result: Fit object which contains info on the fit
        :return: fit results container
        :rtype: FitResults
        """

        results = FitResults()
        for name, value in kwargs.items():
            if getattr(results, name, False):
                setattr(results, name, value)
        results.success = fit_results.success
        pars = self._cached_pars
        item = {}
        for index, name in enumerate(self._cached_model.pars.keys()):
            dict_name = name[len(MINIMIZER_PARAMETER_PREFIX) :]

            item[name] = pars[dict_name].value

        results.p0 = self._p_0
        results.p = item
        results.x = self._cached_model.x
        results.y_obs = self._cached_model.y
        results.y_calc = self.evaluate(results.x, minimizer_parameters=results.p)
        results.y_err = self._cached_model.dy
        # results.residual = results.y_obs - results.y_calc
        # results.goodness_of_fit = np.sum(results.residual**2)
        results.minimizer_engine = self.__class__
        results.fit_args = None
        results.engine_result = fit_results
        # results.check_sanity()
        return results

package class-attribute instance-attribute

package = 'bumps'

_p_0 instance-attribute

_p_0 = {}

__init__

__init__(obj, fit_function, minimizer_enum=None)

Initialize the fitting engine with an ObjBase and an arbitrary fitting function.

:param obj: Object containing elements of the Parameter class
:type obj: ObjBase
:param fit_function: function that when called returns y values. 'x' must be the first and only positional argument. Additional values can be supplied by keyword/value pairs
:type fit_function: Callable

Source code in src/easyscience/fitting/minimizers/minimizer_bumps.py
def __init__(
    self,
    obj,  #: ObjBase,
    fit_function: Callable,
    minimizer_enum: Optional[AvailableMinimizers] = None,
):  # todo after constraint changes, add type hint: obj: ObjBase  # noqa: E501
    """
    Initialize the fitting engine with a `ObjBase` and an arbitrary fitting function.

    :param obj: Object containing elements of the `Parameter` class
    :type obj: ObjBase
    :param fit_function: function that when called returns y values. 'x' must be the first
                        and only positional argument. Additional values can be supplied by
                        keyword/value pairs
    :type fit_function: Callable
    """
    super().__init__(obj=obj, fit_function=fit_function, minimizer_enum=minimizer_enum)
    self._p_0 = {}

all_methods staticmethod

all_methods()
Source code in src/easyscience/fitting/minimizers/minimizer_bumps.py
@staticmethod
def all_methods() -> List[str]:
    return FIT_AVAILABLE_IDS_FILTERED

supported_methods staticmethod

supported_methods()
Source code in src/easyscience/fitting/minimizers/minimizer_bumps.py
@staticmethod
def supported_methods() -> List[str]:
    # only a small subset
    methods = ['amoeba', 'newton', 'lm']
    return methods
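
For example, the method identifiers can be queried without instantiating the minimizer:

from easyscience.fitting.minimizers import Bumps

print(Bumps.supported_methods())  # ['amoeba', 'newton', 'lm']
print(Bumps.all_methods())        # every fit id exposed by the installed bumps version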

fit

fit(
    x,
    y,
    weights,
    model=None,
    parameters=None,
    method=None,
    tolerance=None,
    max_evaluations=None,
    minimizer_kwargs=None,
    engine_kwargs=None,
    **kwargs,
)

Perform a fit using the bumps engine.

:param x: points to be calculated at
:type x: np.ndarray
:param y: measured points
:type y: np.ndarray
:param weights: Weights for supplied measured points
:type weights: np.ndarray
:param model: Optional Model which is being fitted to
:type model: Callable
:param parameters: Optional parameters for the fit
:type parameters: List[BumpsParameter]
:param kwargs: Additional arguments for the fitting function.
:param method: Method for minimization
:type method: str
:return: Fit results
:rtype: FitResults

Source code in src/easyscience/fitting/minimizers/minimizer_bumps.py
def fit(
    self,
    x: np.ndarray,
    y: np.ndarray,
    weights: np.ndarray,
    model: Optional[Callable] = None,
    parameters: Optional[Parameter] = None,
    method: Optional[str] = None,
    tolerance: Optional[float] = None,
    max_evaluations: Optional[int] = None,
    minimizer_kwargs: Optional[dict] = None,
    engine_kwargs: Optional[dict] = None,
    **kwargs,
) -> FitResults:
    """
    Perform a fit using the bumps engine.

    :param x: points to be calculated at
    :type x: np.ndarray
    :param y: measured points
    :type y: np.ndarray
    :param weights: Weights for supplied measured points
    :type weights: np.ndarray
    :param model: Optional Model which is being fitted to
    :type model: lmModel
    :param parameters: Optional parameters for the fit
    :type parameters: List[BumpsParameter]
    :param kwargs: Additional arguments for the fitting function.
    :param method: Method for minimization
    :type method: str
    :return: Fit results
    :rtype: ModelResult
    """
    method_dict = self._get_method_kwargs(method)

    x, y, weights = np.asarray(x), np.asarray(y), np.asarray(weights)

    if y.shape != x.shape:
        raise ValueError('x and y must have the same shape.')

    if weights.shape != x.shape:
        raise ValueError('Weights must have the same shape as x and y.')

    if not np.isfinite(weights).all():
        raise ValueError('Weights cannot be NaN or infinite.')

    if (weights <= 0).any():
        raise ValueError('Weights must be strictly positive and non-zero.')

    if engine_kwargs is None:
        engine_kwargs = {}

    if minimizer_kwargs is None:
        minimizer_kwargs = {}
    minimizer_kwargs.update(engine_kwargs)

    if tolerance is not None:
        minimizer_kwargs['ftol'] = tolerance  # tolerance for change in function value
        minimizer_kwargs['xtol'] = tolerance  # tolerance for change in parameter value, could be an independent value
    if max_evaluations is not None:
        minimizer_kwargs['steps'] = max_evaluations

    if model is None:
        model_function = self._make_model(parameters=parameters)
        model = model_function(x, y, weights)
    self._cached_model = model

    self._p_0 = {f'p{key}': self._cached_pars[key].value for key in self._cached_pars.keys()}

    problem = FitProblem(model)
    # Why do we do this? Because a fitting template has to have global_object instantiated outside pre-runtime
    from easyscience import global_object

    stack_status = global_object.stack.enabled
    global_object.stack.enabled = False

    try:
        model_results = bumps_fit(problem, **method_dict, **minimizer_kwargs, **kwargs)
        self._set_parameter_fit_result(model_results, stack_status, problem._parameters)
        results = self._gen_fit_results(model_results)
    except Exception as e:
        for key in self._cached_pars.keys():
            self._cached_pars[key].value = self._cached_pars_vals[key][0]
        raise FitError(e)
    return results

convert_to_pars_obj

convert_to_pars_obj(par_list=None)

Create a container with the Parameters converted from the base object.

:param par_list: If only a single/selection of parameter is required. Specify as a list
:type par_list: List[str]
:return: bumps Parameters list
:rtype: List[BumpsParameter]

Source code in src/easyscience/fitting/minimizers/minimizer_bumps.py
def convert_to_pars_obj(self, par_list: Optional[List] = None) -> List[BumpsParameter]:
    """
    Create a container with the `Parameters` converted from the base object.

    :param par_list: If only a single/selection of parameter is required. Specify as a list
    :type par_list: List[str]
    :return: bumps Parameters list
    :rtype: List[BumpsParameter]
    """
    if par_list is None:
        # Assume that we have a ObjBase for which we can obtain a list
        par_list = self._object.get_fit_parameters()
    pars_obj = [self.__class__.convert_to_par_object(obj) for obj in par_list]
    return pars_obj

convert_to_par_object staticmethod

convert_to_par_object(obj)

Convert an EasyScience.variable.Parameter object to a bumps Parameter object

:return: bumps Parameter compatible object. :rtype: BumpsParameter

Source code in src/easyscience/fitting/minimizers/minimizer_bumps.py
@staticmethod
def convert_to_par_object(obj) -> BumpsParameter:
    """
    Convert an `EasyScience.variable.Parameter` object to a bumps Parameter object

    :return: bumps Parameter compatible object.
    :rtype: BumpsParameter
    """

    value = obj.value

    return BumpsParameter(
        name=MINIMIZER_PARAMETER_PREFIX + obj.unique_name,
        value=value,
        bounds=[obj.min, obj.max],
        fixed=obj.fixed,
    )

_make_model

_make_model(parameters=None)

Generate a bumps model from the supplied fit_function and parameters in the base object. Note that this makes a callable as it needs to be initialized with x, y, weights

:return: Callable to make a bumps Curve model :rtype: Callable

Source code in src/easyscience/fitting/minimizers/minimizer_bumps.py
def _make_model(self, parameters: Optional[List[BumpsParameter]] = None) -> Callable:
    """
    Generate a bumps model from the supplied `fit_function` and parameters in the base object.
    Note that this makes a callable as it needs to be initialized with *x*, *y*, *weights*

    :return: Callable to make a bumps Curve model
    :rtype: Callable
    """
    fit_func = self._generate_fit_function()

    def _outer(obj):
        def _make_func(x, y, weights):
            bumps_pars = {}
            if not parameters:
                for name, par in obj._cached_pars.items():
                    bumps_pars[MINIMIZER_PARAMETER_PREFIX + str(name)] = obj.convert_to_par_object(par)
            else:
                for par in parameters:
                    bumps_pars[MINIMIZER_PARAMETER_PREFIX + par.unique_name] = obj.convert_to_par_object(par)
            return Curve(fit_func, x, y, dy=weights, **bumps_pars)

        return _make_func

    return _outer(self)

_set_parameter_fit_result

_set_parameter_fit_result(
    fit_result, stack_status, par_list
)

Update parameters to their final values and assign a std error to them.

:param fit_result: Fit object which contains info on the fit
:return: None
:rtype: noneType

Source code in src/easyscience/fitting/minimizers/minimizer_bumps.py
def _set_parameter_fit_result(self, fit_result, stack_status: bool, par_list: List[BumpsParameter]):
    """
    Update parameters to their final values and assign a std error to them.

    :param fit_result: Fit object which contains info on the fit
    :return: None
    :rtype: noneType
    """
    from easyscience import global_object

    pars = self._cached_pars

    if stack_status:
        for name in pars.keys():
            pars[name].value = self._cached_pars_vals[name][0]
            pars[name].error = self._cached_pars_vals[name][1]
        global_object.stack.enabled = True
        global_object.stack.beginMacro('Fitting routine')

    for index, name in enumerate([par.name for par in par_list]):
        dict_name = name[len(MINIMIZER_PARAMETER_PREFIX) :]
        pars[dict_name].value = fit_result.x[index]
        pars[dict_name].error = fit_result.dx[index]
    if stack_status:
        global_object.stack.endMacro()

_gen_fit_results

_gen_fit_results(fit_results, **kwargs)

Convert fit results into the unified FitResults format

:param fit_result: Fit object which contains info on the fit
:return: fit results container
:rtype: FitResults

Source code in src/easyscience/fitting/minimizers/minimizer_bumps.py
def _gen_fit_results(self, fit_results, **kwargs) -> FitResults:
    """
    Convert fit results into the unified `FitResults` format

    :param fit_result: Fit object which contains info on the fit
    :return: fit results container
    :rtype: FitResults
    """

    results = FitResults()
    for name, value in kwargs.items():
        if getattr(results, name, False):
            setattr(results, name, value)
    results.success = fit_results.success
    pars = self._cached_pars
    item = {}
    for index, name in enumerate(self._cached_model.pars.keys()):
        dict_name = name[len(MINIMIZER_PARAMETER_PREFIX) :]

        item[name] = pars[dict_name].value

    results.p0 = self._p_0
    results.p = item
    results.x = self._cached_model.x
    results.y_obs = self._cached_model.y
    results.y_calc = self.evaluate(results.x, minimizer_parameters=results.p)
    results.y_err = self._cached_model.dy
    # results.residual = results.y_obs - results.y_calc
    # results.goodness_of_fit = np.sum(results.residual**2)
    results.minimizer_engine = self.__class__
    results.fit_args = None
    results.engine_result = fit_results
    # results.check_sanity()
    return results

Bumps-based minimizer implementation.

easyscience.fitting.minimizers.DFO

Bases: MinimizerBase

This is a wrapper to Derivative Free Optimisation for Least Square: https://numericalalgorithmsgroup.github.io/dfols/
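
A short usage sketch under the same assumptions as the Bumps example above (sample and fit_function are placeholder names for an ObjBase-derived object and its model function):

from easyscience.fitting.minimizers import DFO

minimizer = DFO(obj=sample, fit_function=fit_function)  # placeholders, see note above

# DFO-LS exposes a single least-squares method, so `method` can be omitted.
# tolerance is forwarded to DFO-LS as rhoend and must not exceed 0.1;
# max_evaluations is forwarded as maxfun.
result = minimizer.fit(x, y_obs, weights, tolerance=1e-6, max_evaluations=500)
print(result.success, result.p)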

Source code in src/easyscience/fitting/minimizers/minimizer_dfo.py
class DFO(MinimizerBase):
    """
    This is a wrapper to Derivative Free Optimisation for Least Square: https://numericalalgorithmsgroup.github.io/dfols/
    """

    package = 'dfo'

    def __init__(
        self,
        obj,  #: ObjBase,
        fit_function: Callable,
        minimizer_enum: Optional[AvailableMinimizers] = None,
    ):  # todo after constraint changes, add type hint: obj: ObjBase  # noqa: E501
        """
        Initialize the fitting engine with a `ObjBase` and an arbitrary fitting function.

        :param obj: Object containing elements of the `Parameter` class
        :type obj: ObjBase
        :param fit_function: function that when called returns y values. 'x' must be the first
                            and only positional argument. Additional values can be supplied by
                            keyword/value pairs
        :type fit_function: Callable
        """
        super().__init__(obj=obj, fit_function=fit_function, minimizer_enum=minimizer_enum)
        self._p_0 = {}

    @staticmethod
    def supported_methods() -> List[str]:
        return ['leastsq']

    @staticmethod
    def all_methods() -> List[str]:
        return ['leastsq']

    def fit(
        self,
        x: np.ndarray,
        y: np.ndarray,
        weights: np.ndarray,
        model: Optional[Callable] = None,
        parameters: Optional[List[Parameter]] = None,
        method: str = None,
        tolerance: Optional[float] = None,
        max_evaluations: Optional[int] = None,
        **kwargs,
    ) -> FitResults:
        """
        Perform a fit using the DFO-ls engine.

        :param x: points to be calculated at
        :type x: np.ndarray
        :param y: measured points
        :type y: np.ndarray
        :param weights: Weights for supplied measured points
        :type weights: np.ndarray
        :param model: Optional Model which is being fitted to
        :type model: lmModel
        :param parameters: Optional parameters for the fit
        :type parameters: List[bumpsParameter]
        :param kwargs: Additional arguments for the fitting function.
        :param method: Method for minimization
        :type method: str
        :return: Fit results
        :rtype: ModelResult
        """
        x, y, weights = np.asarray(x), np.asarray(y), np.asarray(weights)

        if y.shape != x.shape:
            raise ValueError('x and y must have the same shape.')

        if weights.shape != x.shape:
            raise ValueError('Weights must have the same shape as x and y.')

        if not np.isfinite(weights).all():
            raise ValueError('Weights cannot be NaN or infinite.')

        if (weights <= 0).any():
            raise ValueError('Weights must be strictly positive and non-zero.')

        if model is None:
            model_function = self._make_model(parameters=parameters)
            model = model_function(x, y, weights)
        self._cached_model = model
        self._cached_model.x = x
        self._cached_model.y = y

        self._p_0 = {f'p{key}': self._cached_pars[key].value for key in self._cached_pars.keys()}

        # Why do we do this? Because a fitting template has to have global_object instantiated outside pre-runtime
        from easyscience import global_object

        stack_status = global_object.stack.enabled
        global_object.stack.enabled = False

        kwargs = self._prepare_kwargs(tolerance, max_evaluations, **kwargs)

        try:
            model_results = self._dfo_fit(self._cached_pars, model, **kwargs)
            self._set_parameter_fit_result(model_results, stack_status)
            results = self._gen_fit_results(model_results, weights)
        except Exception as e:
            for key in self._cached_pars.keys():
                self._cached_pars[key].value = self._cached_pars_vals[key][0]
            raise FitError(e)
        return results

    def convert_to_pars_obj(self, par_list: Optional[list] = None):
        """
        Required by interface but not needed for DFO-LS
        """
        pass

    @staticmethod
    def convert_to_par_object(obj) -> None:
        """
        Required by interface but not needed for DFO-LS
        """
        pass

    def _make_model(self, parameters: Optional[List[Parameter]] = None) -> Callable:
        """
        Generate a model from the supplied `fit_function` and parameters in the base object.
        Note that this makes a callable as it needs to be initialized with *x*, *y*, *weights*

        :return: Callable model which returns residuals
        :rtype: Callable
        """
        fit_func = self._generate_fit_function()

        def _outer(obj: DFO):
            def _make_func(x, y, weights):
                dfo_pars = {}
                if not parameters:
                    for name, par in obj._cached_pars.items():
                        dfo_pars[MINIMIZER_PARAMETER_PREFIX + str(name)] = par.value
                else:
                    for par in parameters:
                        dfo_pars[MINIMIZER_PARAMETER_PREFIX + par.unique_name] = par.value

                def _residuals(pars_values: List[float]) -> np.ndarray:
                    for idx, par_name in enumerate(dfo_pars.keys()):
                        dfo_pars[par_name] = pars_values[idx]
                    return (y - fit_func(x, **dfo_pars)) / weights

                return _residuals

            return _make_func

        return _outer(self)

    def _set_parameter_fit_result(self, fit_result, stack_status, ci: float = 0.95) -> None:
        """
        Update parameters to their final values and assign a std error to them.

        :param fit_result: Fit object which contains info on the fit
        :param ci: Confidence interval for calculating errors. Default 95%
        :return: None
        :rtype: noneType
        """
        from easyscience import global_object

        pars = self._cached_pars
        if stack_status:
            for name in pars.keys():
                pars[name].value = self._cached_pars_vals[name][0]
                pars[name].error = self._cached_pars_vals[name][1]
            global_object.stack.enabled = True
            global_object.stack.beginMacro('Fitting routine')

        error_matrix = self._error_from_jacobian(fit_result.jacobian, fit_result.resid, ci)
        for idx, par in enumerate(pars.values()):
            par.value = fit_result.x[idx]
            par.error = error_matrix[idx, idx]

        if stack_status:
            global_object.stack.endMacro()

    def _gen_fit_results(self, fit_results, weights, **kwargs) -> FitResults:
        """
        Convert fit results into the unified `FitResults` format

        :param fit_result: Fit object which contains info on the fit
        :return: fit results container
        :rtype: FitResults
        """

        results = FitResults()
        for name, value in kwargs.items():
            if getattr(results, name, False):
                setattr(results, name, value)
        results.success = not bool(fit_results.flag)

        pars = {}
        for p_name, par in self._cached_pars.items():
            pars[f'p{p_name}'] = par.value
        results.p = pars

        results.p0 = self._p_0
        results.x = self._cached_model.x
        results.y_obs = self._cached_model.y
        results.y_calc = self.evaluate(results.x, minimizer_parameters=results.p)
        results.y_err = weights
        # results.residual = results.y_obs - results.y_calc
        # results.goodness_of_fit = fit_results.f

        results.minimizer_engine = self.__class__
        results.fit_args = None
        # results.check_sanity()

        return results

    @staticmethod
    def _dfo_fit(
        pars: Dict[str, Parameter],
        model: Callable,
        **kwargs,
    ):
        """
        Method to convert EasyScience styling to DFO-LS styling (yes, again)

        :param model: Model which accepts f(x[0])
        :type model: Callable
        :param kwargs: Any additional arguments for dfols.solver
        :type kwargs: dict
        :return: dfols fit results container
        """

        pars_values = np.array([par.value for par in pars.values()])

        bounds = (
            np.array([par.min for par in pars.values()]),
            np.array([par.max for par in pars.values()]),
        )
        # https://numericalalgorithmsgroup.github.io/dfols/build/html/userguide.html
        if not np.isinf(bounds).any():
            # It is only possible to scale (normalize) variables if they are bound (different from inf)
            kwargs['scaling_within_bounds'] = True

        results = dfols.solve(model, pars_values, bounds=bounds, **kwargs)

        if 'Success' not in results.msg:
            raise FitError(f'Fit failed with message: {results.msg}')

        return results

    @staticmethod
    def _prepare_kwargs(tolerance: Optional[float] = None, max_evaluations: Optional[int] = None, **kwargs) -> dict[str:str]:
        if max_evaluations is not None:
            kwargs['maxfun'] = max_evaluations  # max number of function evaluations
        if tolerance is not None:
            if 0.1 < tolerance:  # the DFO module throws an error for larger values
                raise ValueError('Tolerance must be equal or smaller than 0.1')
            kwargs['rhoend'] = tolerance  # size of the trust region
        return kwargs

package class-attribute instance-attribute

package = 'dfo'

_p_0 instance-attribute

_p_0 = {}

__init__

__init__(obj, fit_function, minimizer_enum=None)

Initialize the fitting engine with an ObjBase and an arbitrary fitting function.

:param obj: Object containing elements of the Parameter class
:type obj: ObjBase
:param fit_function: function that when called returns y values. 'x' must be the first and only positional argument. Additional values can be supplied by keyword/value pairs
:type fit_function: Callable

Source code in src/easyscience/fitting/minimizers/minimizer_dfo.py
def __init__(
    self,
    obj,  #: ObjBase,
    fit_function: Callable,
    minimizer_enum: Optional[AvailableMinimizers] = None,
):  # todo after constraint changes, add type hint: obj: ObjBase  # noqa: E501
    """
    Initialize the fitting engine with a `ObjBase` and an arbitrary fitting function.

    :param obj: Object containing elements of the `Parameter` class
    :type obj: ObjBase
    :param fit_function: function that when called returns y values. 'x' must be the first
                        and only positional argument. Additional values can be supplied by
                        keyword/value pairs
    :type fit_function: Callable
    """
    super().__init__(obj=obj, fit_function=fit_function, minimizer_enum=minimizer_enum)
    self._p_0 = {}

supported_methods staticmethod

supported_methods()
Source code in src/easyscience/fitting/minimizers/minimizer_dfo.py
@staticmethod
def supported_methods() -> List[str]:
    return ['leastsq']

all_methods staticmethod

all_methods()
Source code in src/easyscience/fitting/minimizers/minimizer_dfo.py
@staticmethod
def all_methods() -> List[str]:
    return ['leastsq']

fit

fit(
    x,
    y,
    weights,
    model=None,
    parameters=None,
    method=None,
    tolerance=None,
    max_evaluations=None,
    **kwargs,
)

Perform a fit using the DFO-ls engine.

:param x: points to be calculated at
:type x: np.ndarray
:param y: measured points
:type y: np.ndarray
:param weights: Weights for supplied measured points
:type weights: np.ndarray
:param model: Optional Model which is being fitted to
:type model: Callable
:param parameters: Optional parameters for the fit
:type parameters: List[Parameter]
:param kwargs: Additional arguments for the fitting function.
:param method: Method for minimization
:type method: str
:return: Fit results
:rtype: FitResults

Source code in src/easyscience/fitting/minimizers/minimizer_dfo.py
def fit(
    self,
    x: np.ndarray,
    y: np.ndarray,
    weights: np.ndarray,
    model: Optional[Callable] = None,
    parameters: Optional[List[Parameter]] = None,
    method: str = None,
    tolerance: Optional[float] = None,
    max_evaluations: Optional[int] = None,
    **kwargs,
) -> FitResults:
    """
    Perform a fit using the DFO-ls engine.

    :param x: points to be calculated at
    :type x: np.ndarray
    :param y: measured points
    :type y: np.ndarray
    :param weights: Weights for supplied measured points
    :type weights: np.ndarray
    :param model: Optional Model which is being fitted to
    :type model: lmModel
    :param parameters: Optional parameters for the fit
    :type parameters: List[bumpsParameter]
    :param kwargs: Additional arguments for the fitting function.
    :param method: Method for minimization
    :type method: str
    :return: Fit results
    :rtype: ModelResult
    """
    x, y, weights = np.asarray(x), np.asarray(y), np.asarray(weights)

    if y.shape != x.shape:
        raise ValueError('x and y must have the same shape.')

    if weights.shape != x.shape:
        raise ValueError('Weights must have the same shape as x and y.')

    if not np.isfinite(weights).all():
        raise ValueError('Weights cannot be NaN or infinite.')

    if (weights <= 0).any():
        raise ValueError('Weights must be strictly positive and non-zero.')

    if model is None:
        model_function = self._make_model(parameters=parameters)
        model = model_function(x, y, weights)
    self._cached_model = model
    self._cached_model.x = x
    self._cached_model.y = y

    self._p_0 = {f'p{key}': self._cached_pars[key].value for key in self._cached_pars.keys()}

    # Why do we do this? Because a fitting template has to have global_object instantiated outside pre-runtime
    from easyscience import global_object

    stack_status = global_object.stack.enabled
    global_object.stack.enabled = False

    kwargs = self._prepare_kwargs(tolerance, max_evaluations, **kwargs)

    try:
        model_results = self._dfo_fit(self._cached_pars, model, **kwargs)
        self._set_parameter_fit_result(model_results, stack_status)
        results = self._gen_fit_results(model_results, weights)
    except Exception as e:
        for key in self._cached_pars.keys():
            self._cached_pars[key].value = self._cached_pars_vals[key][0]
        raise FitError(e)
    return results

convert_to_pars_obj

convert_to_pars_obj(par_list=None)

Required by interface but not needed for DFO-LS

Source code in src/easyscience/fitting/minimizers/minimizer_dfo.py
def convert_to_pars_obj(self, par_list: Optional[list] = None):
    """
    Required by interface but not needed for DFO-LS
    """
    pass

convert_to_par_object staticmethod

convert_to_par_object(obj)

Required by interface but not needed for DFO-LS

Source code in src/easyscience/fitting/minimizers/minimizer_dfo.py
@staticmethod
def convert_to_par_object(obj) -> None:
    """
    Required by interface but not needed for DFO-LS
    """
    pass

_make_model

_make_model(parameters=None)

Generate a model from the supplied fit_function and parameters in the base object. Note that this makes a callable as it needs to be initialized with x, y, weights

:return: Callable model which returns residuals :rtype: Callable

Source code in src/easyscience/fitting/minimizers/minimizer_dfo.py
def _make_model(self, parameters: Optional[List[Parameter]] = None) -> Callable:
    """
    Generate a model from the supplied `fit_function` and parameters in the base object.
    Note that this makes a callable as it needs to be initialized with *x*, *y*, *weights*

    :return: Callable model which returns residuals
    :rtype: Callable
    """
    fit_func = self._generate_fit_function()

    def _outer(obj: DFO):
        def _make_func(x, y, weights):
            dfo_pars = {}
            if not parameters:
                for name, par in obj._cached_pars.items():
                    dfo_pars[MINIMIZER_PARAMETER_PREFIX + str(name)] = par.value
            else:
                for par in parameters:
                    dfo_pars[MINIMIZER_PARAMETER_PREFIX + par.unique_name] = par.value

            def _residuals(pars_values: List[float]) -> np.ndarray:
                for idx, par_name in enumerate(dfo_pars.keys()):
                    dfo_pars[par_name] = pars_values[idx]
                return (y - fit_func(x, **dfo_pars)) / weights

            return _residuals

        return _make_func

    return _outer(self)

_set_parameter_fit_result

_set_parameter_fit_result(
    fit_result, stack_status, ci=0.95
)

Update parameters to their final values and assign a std error to them.

:param fit_result: Fit object which contains info on the fit
:param ci: Confidence interval for calculating errors. Default 95%
:return: None
:rtype: noneType

Source code in src/easyscience/fitting/minimizers/minimizer_dfo.py
def _set_parameter_fit_result(self, fit_result, stack_status, ci: float = 0.95) -> None:
    """
    Update parameters to their final values and assign a std error to them.

    :param fit_result: Fit object which contains info on the fit
    :param ci: Confidence interval for calculating errors. Default 95%
    :return: None
    :rtype: noneType
    """
    from easyscience import global_object

    pars = self._cached_pars
    if stack_status:
        for name in pars.keys():
            pars[name].value = self._cached_pars_vals[name][0]
            pars[name].error = self._cached_pars_vals[name][1]
        global_object.stack.enabled = True
        global_object.stack.beginMacro('Fitting routine')

    error_matrix = self._error_from_jacobian(fit_result.jacobian, fit_result.resid, ci)
    for idx, par in enumerate(pars.values()):
        par.value = fit_result.x[idx]
        par.error = error_matrix[idx, idx]

    if stack_status:
        global_object.stack.endMacro()

_gen_fit_results

_gen_fit_results(fit_results, weights, **kwargs)

Convert fit results into the unified FitResults format

:param fit_result: Fit object which contains info on the fit
:return: fit results container
:rtype: FitResults

Source code in src/easyscience/fitting/minimizers/minimizer_dfo.py
def _gen_fit_results(self, fit_results, weights, **kwargs) -> FitResults:
    """
    Convert fit results into the unified `FitResults` format

    :param fit_result: Fit object which contains info on the fit
    :return: fit results container
    :rtype: FitResults
    """

    results = FitResults()
    for name, value in kwargs.items():
        if getattr(results, name, False):
            setattr(results, name, value)
    results.success = not bool(fit_results.flag)

    pars = {}
    for p_name, par in self._cached_pars.items():
        pars[f'p{p_name}'] = par.value
    results.p = pars

    results.p0 = self._p_0
    results.x = self._cached_model.x
    results.y_obs = self._cached_model.y
    results.y_calc = self.evaluate(results.x, minimizer_parameters=results.p)
    results.y_err = weights
    # results.residual = results.y_obs - results.y_calc
    # results.goodness_of_fit = fit_results.f

    results.minimizer_engine = self.__class__
    results.fit_args = None
    # results.check_sanity()

    return results

_dfo_fit staticmethod

_dfo_fit(pars, model, **kwargs)

Method to convert EasyScience styling to DFO-LS styling (yes, again)

:param model: Model which accepts f(x[0])
:type model: Callable
:param kwargs: Any additional arguments for dfols.solve
:type kwargs: dict
:return: dfols fit results container

Source code in src/easyscience/fitting/minimizers/minimizer_dfo.py
@staticmethod
def _dfo_fit(
    pars: Dict[str, Parameter],
    model: Callable,
    **kwargs,
):
    """
    Method to convert EasyScience styling to DFO-LS styling (yes, again)

    :param model: Model which accepts f(x[0])
    :type model: Callable
    :param kwargs: Any additional arguments for dfols.solver
    :type kwargs: dict
    :return: dfols fit results container
    """

    pars_values = np.array([par.value for par in pars.values()])

    bounds = (
        np.array([par.min for par in pars.values()]),
        np.array([par.max for par in pars.values()]),
    )
    # https://numericalalgorithmsgroup.github.io/dfols/build/html/userguide.html
    if not np.isinf(bounds).any():
        # It is only possible to scale (normalize) variables if they are bound (different from inf)
        kwargs['scaling_within_bounds'] = True

    results = dfols.solve(model, pars_values, bounds=bounds, **kwargs)

    if 'Success' not in results.msg:
        raise FitError(f'Fit failed with message: {results.msg}')

    return results

_prepare_kwargs staticmethod

_prepare_kwargs(
    tolerance=None, max_evaluations=None, **kwargs
)
Source code in src/easyscience/fitting/minimizers/minimizer_dfo.py
@staticmethod
def _prepare_kwargs(tolerance: Optional[float] = None, max_evaluations: Optional[int] = None, **kwargs) -> dict[str:str]:
    if max_evaluations is not None:
        kwargs['maxfun'] = max_evaluations  # max number of function evaluations
    if tolerance is not None:
        if 0.1 < tolerance:  # the DFO module throws an error for larger values
            raise ValueError('Tolerance must be equal or smaller than 0.1')
        kwargs['rhoend'] = tolerance  # size of the trust region
    return kwargs
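
For illustration, the mapping performed by this static method (a sketch of the expected behaviour of the source above):

from easyscience.fitting.minimizers import DFO

print(DFO._prepare_kwargs(tolerance=1e-4, max_evaluations=300))
# {'maxfun': 300, 'rhoend': 0.0001}

# Tolerances above 0.1 are rejected before anything is sent to DFO-LS:
# DFO._prepare_kwargs(tolerance=0.5)  # raises ValueError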

DFO-LS-based minimizer implementation.

Global State Management

Global Object

easyscience.global_object.GlobalObject

GlobalObject is the assimilated knowledge of EasyScience. Every class based on EasyScience gets brought into the collective.
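
A brief sketch of interacting with the singleton through the module-level global_object import used throughout the source in this reference:

from easyscience import global_object

# The undo/redo stack is created lazily; instantiate it before enabling it.
global_object.instantiate_stack()
global_object.stack.enabled = True

# Unique names follow the pattern <prefix>_<n>, incrementing the largest existing index.
print(global_object.generate_unique_name('Parameter'))  # e.g. 'Parameter_0'

# Every EasyScience object registers itself in the map on creation.
print(global_object.map.vertices())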

Source code in src/easyscience/global_object/global_object.py
@singleton
class GlobalObject:
    """
    GlobalObject is the assimilated knowledge of `EasyScience`. Every class based on `EasyScience` gets brought
    into the collective.
    """

    __log = Logger()
    __map = Map()
    __stack = None
    __debug = False

    def __init__(self):
        # Logger. This is so there's a unified logging interface
        self.log: Logger = self.__log
        # Debug. Global debugging level
        self.debug: bool = self.__debug
        # Stack. This is where the undo/redo operations are stored.
        self.stack = self.__stack
        #
        self.script: ScriptManager = ScriptManager()
        # Map. This is the conduit database between all global object species
        self.map: Map = self.__map

    def instantiate_stack(self):
        """
        The undo/redo stack references the collective. Hence it has to be imported
        after initialization.

        :return: None
        :rtype: noneType
        """
        from easyscience.global_object.undo_redo import UndoStack

        self.stack = UndoStack()

    def generate_unique_name(self, name_prefix: str) -> str:
        """
        Generate a generic unique name for the object using the class name and a global iterator.
        Names are in the format `name_prefix_0`, `name_prefix_1`, `name_prefix_2`, etc.

        :param name_prefix: The prefix to be used for the name
        """
        names_with_prefix = [name for name in self.map.vertices() if name.startswith(name_prefix + '_')]
        if names_with_prefix:
            name_with_prefix_count = [0]
            for name in names_with_prefix:
                # Strip away the prefix and trailing _
                name_without_prefix = name.replace(name_prefix + '_', '')
                if name_without_prefix.isdecimal():
                    name_with_prefix_count.append(int(name_without_prefix))
            unique_name = f'{name_prefix}_{max(name_with_prefix_count) + 1}'
        else:
            unique_name = f'{name_prefix}_0'
        return unique_name

__log class-attribute instance-attribute

__log = Logger()

__map class-attribute instance-attribute

__map = Map()

__stack class-attribute instance-attribute

__stack = None

__debug class-attribute instance-attribute

__debug = False

log instance-attribute

log = __log

debug instance-attribute

debug = __debug

stack instance-attribute

stack = __stack

script instance-attribute

script = ScriptManager()

map instance-attribute

map = __map

__init__

__init__()
Source code in src/easyscience/global_object/global_object.py
def __init__(self):
    # Logger. This is so there's a unified logging interface
    self.log: Logger = self.__log
    # Debug. Global debugging level
    self.debug: bool = self.__debug
    # Stack. This is where the undo/redo operations are stored.
    self.stack = self.__stack
    #
    self.script: ScriptManager = ScriptManager()
    # Map. This is the conduit database between all global object species
    self.map: Map = self.__map

instantiate_stack

instantiate_stack()

The undo/redo stack references the collective. Hence it has to be imported after initialization.

:return: None :rtype: noneType

Source code in src/easyscience/global_object/global_object.py
def instantiate_stack(self):
    """
    The undo/redo stack references the collective. Hence it has to be imported
    after initialization.

    :return: None
    :rtype: noneType
    """
    from easyscience.global_object.undo_redo import UndoStack

    self.stack = UndoStack()

generate_unique_name

generate_unique_name(name_prefix)

Generate a generic unique name for the object using the class name and a global iterator. Names are in the format name_prefix_0, name_prefix_1, name_prefix_2, etc.

:param name_prefix: The prefix to be used for the name

Source code in src/easyscience/global_object/global_object.py
def generate_unique_name(self, name_prefix: str) -> str:
    """
    Generate a generic unique name for the object using the class name and a global iterator.
    Names are in the format `name_prefix_0`, `name_prefix_1`, `name_prefix_2`, etc.

    :param name_prefix: The prefix to be used for the name
    """
    names_with_prefix = [name for name in self.map.vertices() if name.startswith(name_prefix + '_')]
    if names_with_prefix:
        name_with_prefix_count = [0]
        for name in names_with_prefix:
            # Strip away the prefix and trailing _
            name_without_prefix = name.replace(name_prefix + '_', '')
            if name_without_prefix.isdecimal():
                name_with_prefix_count.append(int(name_without_prefix))
        unique_name = f'{name_prefix}_{max(name_with_prefix_count) + 1}'
    else:
        unique_name = f'{name_prefix}_0'
    return unique_name

Singleton managing global state, logging, and object tracking.

Object Map

easyscience.global_object.Map

Source code in src/easyscience/global_object/map.py
class Map:
    def __init__(self):
        # A dictionary of object names and their corresponding objects
        self._store = weakref.WeakValueDictionary()
        # A dict with object names as keys and a list of their object types as values, with weak references
        self.__type_dict = {}

    def vertices(self) -> List[str]:
        """returns the vertices of a map"""
        return list(self._store.keys())

    def edges(self):
        """returns the edges of a map"""
        return self.__generate_edges()

    @property
    def argument_objs(self) -> List[str]:
        return self._nested_get('argument')

    @property
    def created_objs(self) -> List[str]:
        return self._nested_get('created')

    @property
    def created_internal(self) -> List[str]:
        return self._nested_get('created_internal')

    @property
    def returned_objs(self) -> List[str]:
        return self._nested_get('returned')

    def _nested_get(self, obj_type: str) -> List[str]:
        """Access a nested object in root by key sequence."""
        return [key for key, item in self.__type_dict.items() if obj_type in item.type]

    def get_item_by_key(self, item_id: str) -> object:
        if item_id in self._store.keys():
            return self._store[item_id]
        raise ValueError('Item not in map.')

    def is_known(self, vertex: object) -> bool:
        # All objects should have a 'unique_name' attribute
        return vertex.unique_name in self._store.keys()

    def find_type(self, vertex: object) -> List[str]:
        if self.is_known(vertex):
            return self.__type_dict[vertex.unique_name].type

    def reset_type(self, obj, default_type: str):
        if obj.unique_name in self.__type_dict.keys():
            self.__type_dict[obj.unique_name].reset_type(default_type)

    def change_type(self, obj, new_type: str):
        if obj.unique_name in self.__type_dict.keys():
            self.__type_dict[obj.unique_name].type = new_type

    def add_vertex(self, obj: object, obj_type: str = None):
        name = obj.unique_name
        if name in self._store.keys():
            raise ValueError(f'Object name {name} already exists in the graph.')
        self._store[name] = obj
        self.__type_dict[name] = _EntryList()  # Add objects type to the list of types
        self.__type_dict[name].finalizer = weakref.finalize(self._store[name], self.prune, name)
        self.__type_dict[name].type = obj_type

    def add_edge(self, start_obj: object, end_obj: object):
        if start_obj.unique_name in self.__type_dict.keys():
            self.__type_dict[start_obj.unique_name].append(end_obj.unique_name)
        else:
            raise AttributeError('Start object not in map.')

    def get_edges(self, start_obj) -> List[str]:
        if start_obj.unique_name in self.__type_dict.keys():
            return list(self.__type_dict[start_obj.unique_name])
        else:
            raise AttributeError

    def __generate_edges(self) -> list:
        """A static method generating the edges of the
        map. Edges are represented as sets
        with one (a loop back to the vertex) or two
        vertices
        """
        edges = []
        for vertex in self.__type_dict:
            for neighbour in self.__type_dict[vertex]:
                if {neighbour, vertex} not in edges:
                    edges.append({vertex, neighbour})
        return edges

    def prune_vertex_from_edge(self, parent_obj, child_obj):
        vertex1 = parent_obj.unique_name
        if child_obj is None:
            return
        vertex2 = child_obj.unique_name

        if vertex1 in self.__type_dict.keys() and vertex2 in self.__type_dict[vertex1]:
            del self.__type_dict[vertex1][self.__type_dict[vertex1].index(vertex2)]

    def prune(self, key: str):
        if key in self.__type_dict.keys():
            del self.__type_dict[key]
            del self._store[key]

    def find_isolated_vertices(self) -> list:
        """returns a list of isolated vertices."""
        graph = self.__type_dict
        isolated = []
        for vertex in graph:
            print(isolated, vertex)
            if not graph[vertex]:
                isolated += [vertex]
        return isolated

    def find_path(self, start_vertex: str, end_vertex: str, path=[]) -> list:
        """find a path from start_vertex to end_vertex
        in map"""

        graph = self.__type_dict
        path = path + [start_vertex]
        if start_vertex == end_vertex:
            return path
        if start_vertex not in graph:
            return []
        for vertex in graph[start_vertex]:
            if vertex not in path:
                extended_path = self.find_path(vertex, end_vertex, path)
                if extended_path:
                    return extended_path
        return []

    def find_all_paths(self, start_vertex: str, end_vertex: str, path=[]) -> list:
        """find all paths from start_vertex to
        end_vertex in map"""

        graph = self.__type_dict
        path = path + [start_vertex]
        if start_vertex == end_vertex:
            return [path]
        if start_vertex not in graph:
            return []
        paths = []
        for vertex in graph[start_vertex]:
            if vertex not in path:
                extended_paths = self.find_all_paths(vertex, end_vertex, path)
                for p in extended_paths:
                    paths.append(p)
        return paths

    def reverse_route(self, end_vertex: str, start_vertex: Optional[str] = None) -> List:
        """
        In this case we have an object and want to know the connections to get to another in reverse.
        We might not know the start_object. In which case we follow the shortest path to a base vertex.
        :param end_obj:
        :type end_obj:
        :param start_obj:
        :type start_obj:
        :return:
        :rtype:
        """
        path_length = sys.maxsize
        optimum_path = []
        if start_vertex is None:
            # We now have to find where to begin.....
            for possible_start, vertices in self.__type_dict.items():
                if end_vertex in vertices:
                    temp_path = self.find_path(possible_start, end_vertex)
                    if len(temp_path) < path_length:
                        path_length = len(temp_path)
                        optimum_path = temp_path
        else:
            optimum_path = self.find_path(start_vertex, end_vertex)
        optimum_path.reverse()
        return optimum_path

    def is_connected(self, vertices_encountered=None, start_vertex=None) -> bool:
        """determines if the map is connected"""
        if vertices_encountered is None:
            vertices_encountered = set()
        graph = self.__type_dict
        vertices = list(graph.keys())
        if not start_vertex:
            # chose a vertex from graph as a starting point
            start_vertex = vertices[0]
        vertices_encountered.add(start_vertex)
        if len(vertices_encountered) != len(vertices):
            for vertex in graph[start_vertex]:
                if vertex not in vertices_encountered and self.is_connected(vertices_encountered, vertex):
                    return True
        else:
            return True
        return False

    def _clear(self):
        """Reset the map to an empty state. Only to be used for testing"""
        for vertex in self.vertices():
            self.prune(vertex)
        gc.collect()
        self.__type_dict = {}

    def __repr__(self) -> str:
        return f'Map object of {len(self._store)} vertices.'
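
A self-contained sketch of the graph API above; Node is a hypothetical stand-in for any object exposing a unique_name attribute, which is all the Map requires:

from easyscience.global_object import Map


class Node:
    """Hypothetical minimal object exposing the unique_name attribute the Map expects."""

    def __init__(self, unique_name):
        self.unique_name = unique_name


graph = Map()
a, b = Node('a_0'), Node('b_0')  # keep references alive: the map stores weak references

graph.add_vertex(a, obj_type='created')
graph.add_vertex(b, obj_type='created')
graph.add_edge(a, b)

print(graph.vertices())               # ['a_0', 'b_0']
print(graph.get_edges(a))             # ['b_0']
print(graph.find_path('a_0', 'b_0'))  # ['a_0', 'b_0']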

_store instance-attribute

_store = WeakValueDictionary()

__type_dict instance-attribute

__type_dict = {}

argument_objs property

argument_objs

created_objs property

created_objs

created_internal property

created_internal

returned_objs property

returned_objs

__init__

__init__()
Source code in src/easyscience/global_object/map.py
def __init__(self):
    # A dictionary of object names and their corresponding objects
    self._store = weakref.WeakValueDictionary()
    # A dict with object names as keys and a list of their object types as values, with weak references
    self.__type_dict = {}

vertices

vertices()

returns the vertices of a map

Source code in src/easyscience/global_object/map.py
def vertices(self) -> List[str]:
    """returns the vertices of a map"""
    return list(self._store.keys())

edges

edges()

returns the edges of a map

Source code in src/easyscience/global_object/map.py
def edges(self):
    """returns the edges of a map"""
    return self.__generate_edges()

_nested_get

_nested_get(obj_type)

Access a nested object in root by key sequence.

Source code in src/easyscience/global_object/map.py
def _nested_get(self, obj_type: str) -> List[str]:
    """Access a nested object in root by key sequence."""
    return [key for key, item in self.__type_dict.items() if obj_type in item.type]

get_item_by_key

get_item_by_key(item_id)
Source code in src/easyscience/global_object/map.py
def get_item_by_key(self, item_id: str) -> object:
    if item_id in self._store.keys():
        return self._store[item_id]
    raise ValueError('Item not in map.')

is_known

is_known(vertex)
Source code in src/easyscience/global_object/map.py
def is_known(self, vertex: object) -> bool:
    # All objects should have a 'unique_name' attribute
    return vertex.unique_name in self._store.keys()

find_type

find_type(vertex)
Source code in src/easyscience/global_object/map.py
def find_type(self, vertex: object) -> List[str]:
    if self.is_known(vertex):
        return self.__type_dict[vertex.unique_name].type

reset_type

reset_type(obj, default_type)
Source code in src/easyscience/global_object/map.py
def reset_type(self, obj, default_type: str):
    if obj.unique_name in self.__type_dict.keys():
        self.__type_dict[obj.unique_name].reset_type(default_type)

change_type

change_type(obj, new_type)
Source code in src/easyscience/global_object/map.py
def change_type(self, obj, new_type: str):
    if obj.unique_name in self.__type_dict.keys():
        self.__type_dict[obj.unique_name].type = new_type

add_vertex

add_vertex(obj, obj_type=None)
Source code in src/easyscience/global_object/map.py
def add_vertex(self, obj: object, obj_type: str = None):
    name = obj.unique_name
    if name in self._store.keys():
        raise ValueError(f'Object name {name} already exists in the graph.')
    self._store[name] = obj
    self.__type_dict[name] = _EntryList()  # Add objects type to the list of types
    self.__type_dict[name].finalizer = weakref.finalize(self._store[name], self.prune, name)
    self.__type_dict[name].type = obj_type

add_edge

add_edge(start_obj, end_obj)
Source code in src/easyscience/global_object/map.py
def add_edge(self, start_obj: object, end_obj: object):
    if start_obj.unique_name in self.__type_dict.keys():
        self.__type_dict[start_obj.unique_name].append(end_obj.unique_name)
    else:
        raise AttributeError('Start object not in map.')

get_edges

get_edges(start_obj)
Source code in src/easyscience/global_object/map.py
def get_edges(self, start_obj) -> List[str]:
    if start_obj.unique_name in self.__type_dict.keys():
        return list(self.__type_dict[start_obj.unique_name])
    else:
        raise AttributeError

__generate_edges

__generate_edges()

A static method generating the edges of the map. Edges are represented as sets with one (a loop back to the vertex) or two vertices

Source code in src/easyscience/global_object/map.py
def __generate_edges(self) -> list:
    """A static method generating the edges of the
    map. Edges are represented as sets
    with one (a loop back to the vertex) or two
    vertices
    """
    edges = []
    for vertex in self.__type_dict:
        for neighbour in self.__type_dict[vertex]:
            if {neighbour, vertex} not in edges:
                edges.append({vertex, neighbour})
    return edges

prune_vertex_from_edge

prune_vertex_from_edge(parent_obj, child_obj)
Source code in src/easyscience/global_object/map.py
def prune_vertex_from_edge(self, parent_obj, child_obj):
    vertex1 = parent_obj.unique_name
    if child_obj is None:
        return
    vertex2 = child_obj.unique_name

    if vertex1 in self.__type_dict.keys() and vertex2 in self.__type_dict[vertex1]:
        del self.__type_dict[vertex1][self.__type_dict[vertex1].index(vertex2)]

prune

prune(key)
Source code in src/easyscience/global_object/map.py
def prune(self, key: str):
    if key in self.__type_dict.keys():
        del self.__type_dict[key]
        del self._store[key]

find_isolated_vertices

find_isolated_vertices()

returns a list of isolated vertices.

Source code in src/easyscience/global_object/map.py
def find_isolated_vertices(self) -> list:
    """returns a list of isolated vertices."""
    graph = self.__type_dict
    isolated = []
    for vertex in graph:
        print(isolated, vertex)
        if not graph[vertex]:
            isolated += [vertex]
    return isolated

find_path

find_path(start_vertex, end_vertex, path=[])

find a path from start_vertex to end_vertex in map

Source code in src/easyscience/global_object/map.py
def find_path(self, start_vertex: str, end_vertex: str, path=[]) -> list:
    """find a path from start_vertex to end_vertex
    in map"""

    graph = self.__type_dict
    path = path + [start_vertex]
    if start_vertex == end_vertex:
        return path
    if start_vertex not in graph:
        return []
    for vertex in graph[start_vertex]:
        if vertex not in path:
            extended_path = self.find_path(vertex, end_vertex, path)
            if extended_path:
                return extended_path
    return []

find_all_paths

find_all_paths(start_vertex, end_vertex, path=[])

find all paths from start_vertex to end_vertex in map

Source code in src/easyscience/global_object/map.py
def find_all_paths(self, start_vertex: str, end_vertex: str, path=[]) -> list:
    """find all paths from start_vertex to
    end_vertex in map"""

    graph = self.__type_dict
    path = path + [start_vertex]
    if start_vertex == end_vertex:
        return [path]
    if start_vertex not in graph:
        return []
    paths = []
    for vertex in graph[start_vertex]:
        if vertex not in path:
            extended_paths = self.find_all_paths(vertex, end_vertex, path)
            for p in extended_paths:
                paths.append(p)
    return paths

reverse_route

reverse_route(end_vertex, start_vertex=None)

Given an end vertex, return the chain of connections needed to reach it, in reverse order. The start vertex might not be known, in which case the shortest path from a parent vertex is used. :param end_vertex: unique name of the target vertex :param start_vertex: unique name of the vertex to start from (optional) :return: list of vertex names in reverse order :rtype: List

Source code in src/easyscience/global_object/map.py
def reverse_route(self, end_vertex: str, start_vertex: Optional[str] = None) -> List:
    """
    In this case we have an object and want to know the connections to get to another in reverse.
    We might not know the start_object. In which case we follow the shortest path to a base vertex.
    :param end_obj:
    :type end_obj:
    :param start_obj:
    :type start_obj:
    :return:
    :rtype:
    """
    path_length = sys.maxsize
    optimum_path = []
    if start_vertex is None:
        # We now have to find where to begin.....
        for possible_start, vertices in self.__type_dict.items():
            if end_vertex in vertices:
                temp_path = self.find_path(possible_start, end_vertex)
                if len(temp_path) < path_length:
                    path_length = len(temp_path)
                    optimum_path = temp_path
    else:
        optimum_path = self.find_path(start_vertex, end_vertex)
    optimum_path.reverse()
    return optimum_path

is_connected

is_connected(vertices_encountered=None, start_vertex=None)

determines if the map is connected

Source code in src/easyscience/global_object/map.py
def is_connected(self, vertices_encountered=None, start_vertex=None) -> bool:
    """determines if the map is connected"""
    if vertices_encountered is None:
        vertices_encountered = set()
    graph = self.__type_dict
    vertices = list(graph.keys())
    if not start_vertex:
        # chose a vertex from graph as a starting point
        start_vertex = vertices[0]
    vertices_encountered.add(start_vertex)
    if len(vertices_encountered) != len(vertices):
        for vertex in graph[start_vertex]:
            if vertex not in vertices_encountered and self.is_connected(vertices_encountered, vertex):
                return True
    else:
        return True
    return False

_clear

_clear()

Reset the map to an empty state. Only to be used for testing

Source code in src/easyscience/global_object/map.py
def _clear(self):
    """Reset the map to an empty state. Only to be used for testing"""
    for vertex in self.vertices():
        self.prune(vertex)
    gc.collect()
    self.__type_dict = {}

__repr__

__repr__()
Source code in src/easyscience/global_object/map.py
def __repr__(self) -> str:
    return f'Map object of {len(self._store)} vertices.'

Graph-based registry for tracking object relationships and dependencies.
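
The following is a minimal usage sketch, not taken from the EasyScience sources. It assumes the class is importable as easyscience.global_object.map.Map and that any object carrying a unique_name attribute can be registered; the _Node helper below is purely illustrative.

from easyscience.global_object.map import Map


class _Node:
    """Hypothetical stand-in: the map only requires a `unique_name` attribute."""

    def __init__(self, unique_name):
        self.unique_name = unique_name


graph = Map()

# Keep strong references to the nodes: vertices live in a WeakValueDictionary,
# so objects with no other reference are pruned from the map automatically.
a, b, c = _Node('a'), _Node('b'), _Node('c')
for node in (a, b, c):
    graph.add_vertex(node, obj_type='created')

graph.add_edge(a, b)  # a -> b
graph.add_edge(b, c)  # b -> c

print(graph.find_path('a', 'c'))      # ['a', 'b', 'c']
print(graph.reverse_route('c', 'a'))  # ['c', 'b', 'a']
print(graph.is_connected())           # True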

Undo/Redo System

easyscience.global_object.undo_redo.UndoStack

Implements a version of QUndoStack without the Qt dependency

Source code in src/easyscience/global_object/undo_redo.py
class UndoStack:
    """
    Implement a version of QUndoStack without the QT
    """

    def __init__(self, max_history: Union[int, type(None)] = None):
        self._history = deque(maxlen=max_history)
        self._future = deque(maxlen=max_history)
        self._macro_running = False
        self._command_running = False
        self._max_history = max_history
        self._enabled = False

    @property
    def enabled(self) -> bool:
        return self._enabled

    @enabled.setter
    def enabled(self, state: bool):
        if self.enabled and self._macro_running:
            self.endMacro()
        self._enabled = state

    def force_state(self, state: bool):
        self._enabled = state

    @property
    def history(self) -> deque:
        return self._history

    @property
    def future(self) -> deque:
        return self._future

    def push(self, command: T_) -> NoReturn:
        """
        Add a command to the history stack
        """
        # If we're not enabled, then what are we doing!
        if not self.enabled or self._command_running:
            # Do the command and leave.
            command.redo()
            return
        # If there's a macro add the command to the command holder
        if self._macro_running:
            self.history[0].append(command)
        else:
            # Else create the command holder and add it to the stack
            com = CommandHolder()
            com.append(command)
            self.history.appendleft(com)
        # Actually do the command
        command.redo()
        # Reset the future
        self._future = deque(maxlen=self._max_history)

    def pop(self) -> T_:
        """
        !! WARNING - TO BE USED WITH EXTREME CAUTION !!
        !! THIS IS PROBABLY NOT THE FN YOU'RE LOOKING FOR, IT CAN BREAK A LOT OF STUFF !!
        Sometimes you really don't want the last command. Remove it from the stack

        :return: None
        :rtype: None
        """
        pop_it = self._history.popleft()
        popped = pop_it.pop()
        if len(pop_it) > 0:
            self.history.appendleft(pop_it)
        return popped

    def clear(self) -> None:  # NoReturn:
        """
        Remove any commands on the stack and reset the state
        """
        self._history = deque(maxlen=self._max_history)
        self._future = deque(maxlen=self._max_history)
        self._macro_running = False

    def undo(self) -> NoReturn:
        """
        Undo the last change to the stack
        """
        if self.canUndo():
            # Move the command from the past to the future
            this_command_stack = self._history.popleft()
            self._future.appendleft(this_command_stack)

            # Execute all undo commands
            for command in this_command_stack:
                try:
                    self._command_running = True
                    command.undo()
                except Exception as e:
                    print(e)
                finally:
                    self._command_running = False

    def redo(self) -> NoReturn:
        """
        Redo the last `undo` command on the stack
        """
        if self.canRedo():
            # Move from the future to the past
            this_command_stack = self._future.popleft()
            self._history.appendleft(this_command_stack)
            # Need to go from right to left
            this_command_stack = list(this_command_stack)
            this_command_stack.reverse()
            for command in this_command_stack:
                try:
                    self._command_running = True
                    command.redo()
                except Exception as e:
                    print(e)
                finally:
                    self._command_running = False

    def beginMacro(self, text: str) -> NoReturn:
        """
        Start a bulk update i.e. multiple commands under one undo/redo command
        """
        if self._macro_running:
            raise AssertionError('Cannot start a macro when one is already running')
        com = CommandHolder(text)
        self.history.appendleft(com)
        self._macro_running = True

    def endMacro(self) -> NoReturn:
        """
        End a bulk update i.e. multiple commands under one undo/redo command
        """
        if not self._macro_running:
            raise AssertionError('Cannot end a macro when one is not running')
        self._macro_running = False

    def canUndo(self) -> bool:
        """
        Can the last command be undone?
        """
        return len(self._history) > 0 and not self._macro_running

    def canRedo(self) -> bool:
        """
        Can we redo a command?
        """
        return len(self._future) > 0 and not self._macro_running

    def redoText(self) -> str:
        """
        Text associated with a redo item.
        """
        text = ''
        if self.canRedo():
            text = self.future[0].text
        return text

    def undoText(self) -> str:
        """
        Text associated with a undo item.
        """
        text = ''
        if self.canUndo():
            text = self.history[0].text
        return text

_history instance-attribute

_history = deque(maxlen=max_history)

_future instance-attribute

_future = deque(maxlen=max_history)

_macro_running instance-attribute

_macro_running = False

_command_running instance-attribute

_command_running = False

_max_history instance-attribute

_max_history = max_history

_enabled instance-attribute

_enabled = False

enabled property writable

enabled

history property

history

future property

future

__init__

__init__(max_history=None)
Source code in src/easyscience/global_object/undo_redo.py
def __init__(self, max_history: Union[int, type(None)] = None):
    self._history = deque(maxlen=max_history)
    self._future = deque(maxlen=max_history)
    self._macro_running = False
    self._command_running = False
    self._max_history = max_history
    self._enabled = False

force_state

force_state(state)
Source code in src/easyscience/global_object/undo_redo.py
def force_state(self, state: bool):
    self._enabled = state

push

push(command)

Add a command to the history stack

Source code in src/easyscience/global_object/undo_redo.py
def push(self, command: T_) -> NoReturn:
    """
    Add a command to the history stack
    """
    # If we're not enabled, then what are we doing!
    if not self.enabled or self._command_running:
        # Do the command and leave.
        command.redo()
        return
    # If there's a macro add the command to the command holder
    if self._macro_running:
        self.history[0].append(command)
    else:
        # Else create the command holder and add it to the stack
        com = CommandHolder()
        com.append(command)
        self.history.appendleft(com)
    # Actually do the command
    command.redo()
    # Reset the future
    self._future = deque(maxlen=self._max_history)

pop

pop()

!! WARNING - TO BE USED WITH EXTREME CAUTION !! This is probably not the function you're looking for; it can break a lot of state. Sometimes you really don't want the last command: this removes it from the stack and returns it.

:return: the removed command :rtype: T_

Source code in src/easyscience/global_object/undo_redo.py
def pop(self) -> T_:
    """
    !! WARNING - TO BE USED WITH EXTREME CAUTION !!
    !! THIS IS PROBABLY NOT THE FN YOU'RE LOOKING FOR, IT CAN BREAK A LOT OF STUFF !!
    Sometimes you really don't want the last command. Remove it from the stack

    :return: None
    :rtype: None
    """
    pop_it = self._history.popleft()
    popped = pop_it.pop()
    if len(pop_it) > 0:
        self.history.appendleft(pop_it)
    return popped

clear

clear()

Remove any commands on the stack and reset the state

Source code in src/easyscience/global_object/undo_redo.py
def clear(self) -> None:  # NoReturn:
    """
    Remove any commands on the stack and reset the state
    """
    self._history = deque(maxlen=self._max_history)
    self._future = deque(maxlen=self._max_history)
    self._macro_running = False

undo

undo()

Undo the last change to the stack

Source code in src/easyscience/global_object/undo_redo.py
def undo(self) -> NoReturn:
    """
    Undo the last change to the stack
    """
    if self.canUndo():
        # Move the command from the past to the future
        this_command_stack = self._history.popleft()
        self._future.appendleft(this_command_stack)

        # Execute all undo commands
        for command in this_command_stack:
            try:
                self._command_running = True
                command.undo()
            except Exception as e:
                print(e)
            finally:
                self._command_running = False

redo

redo()

Redo the most recently undone command on the stack

Source code in src/easyscience/global_object/undo_redo.py
def redo(self) -> NoReturn:
    """
    Redo the last `undo` command on the stack
    """
    if self.canRedo():
        # Move from the future to the past
        this_command_stack = self._future.popleft()
        self._history.appendleft(this_command_stack)
        # Need to go from right to left
        this_command_stack = list(this_command_stack)
        this_command_stack.reverse()
        for command in this_command_stack:
            try:
                self._command_running = True
                command.redo()
            except Exception as e:
                print(e)
            finally:
                self._command_running = False

beginMacro

beginMacro(text)

Start a bulk update i.e. multiple commands under one undo/redo command

Source code in src/easyscience/global_object/undo_redo.py
def beginMacro(self, text: str) -> NoReturn:
    """
    Start a bulk update i.e. multiple commands under one undo/redo command
    """
    if self._macro_running:
        raise AssertionError('Cannot start a macro when one is already running')
    com = CommandHolder(text)
    self.history.appendleft(com)
    self._macro_running = True

endMacro

endMacro()

End a bulk update i.e. multiple commands under one undo/redo command

Source code in src/easyscience/global_object/undo_redo.py
def endMacro(self) -> NoReturn:
    """
    End a bulk update i.e. multiple commands under one undo/redo command
    """
    if not self._macro_running:
        raise AssertionError('Cannot end a macro when one is not running')
    self._macro_running = False

canUndo

canUndo()

Can the last command be undone?

Source code in src/easyscience/global_object/undo_redo.py
def canUndo(self) -> bool:
    """
    Can the last command be undone?
    """
    return len(self._history) > 0 and not self._macro_running

canRedo

canRedo()

Can we redo a command?

Source code in src/easyscience/global_object/undo_redo.py
def canRedo(self) -> bool:
    """
    Can we redo a command?
    """
    return len(self._future) > 0 and not self._macro_running

redoText

redoText()

Text associated with a redo item.

Source code in src/easyscience/global_object/undo_redo.py
def redoText(self) -> str:
    """
    Text associated with a redo item.
    """
    text = ''
    if self.canRedo():
        text = self.future[0].text
    return text

undoText

undoText()

Text associated with an undo item.

Source code in src/easyscience/global_object/undo_redo.py
def undoText(self) -> str:
    """
    Text associated with a undo item.
    """
    text = ''
    if self.canUndo():
        text = self.history[0].text
    return text

Stack-based undo/redo system for parameter changes.
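
The following is a minimal usage sketch, not taken from the EasyScience sources. It uses the import path shown in the heading above; in normal use EasyScience pushes its own property-change commands, so the SetValue command below is purely illustrative. Any object providing redo() and undo() methods can be pushed.

from easyscience.global_object.undo_redo import UndoStack


class SetValue:
    """Hypothetical command: stores the old value so the change can be reverted."""

    def __init__(self, target, key, new):
        self._target, self._key, self._new = target, key, new
        self._old = target[key]

    def redo(self):
        self._target[self._key] = self._new

    def undo(self):
        self._target[self._key] = self._old


state = {'x': 1.0, 'y': 0.0}
stack = UndoStack()
stack.enabled = True  # the stack starts disabled and only records history when enabled

stack.push(SetValue(state, 'x', 2.0))  # push() also executes the command
print(state['x'])                      # 2.0

# Group several commands into a single undoable step
stack.beginMacro('set two values')
stack.push(SetValue(state, 'x', 3.0))
stack.push(SetValue(state, 'y', 5.0))
stack.endMacro()

stack.undo()   # reverts both changes of the macro in one step
print(state)   # {'x': 2.0, 'y': 0.0}
stack.redo()   # re-applies them
print(state)   # {'x': 3.0, 'y': 5.0}
print(stack.undoText())  # 'set two values'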

Serialization and I/O

Serializer Components

easyscience.io.SerializerComponent

This base class adds the capability of saving and loading (encoding/decoding, serializing/deserializing) EasyScience objects via the encode and decode methods. The default encoder is SerializerDict, which converts the object to a dictionary.

Shortcut methods for dictionary conversion (as_dict and from_dict) are also provided.

Source code in src/easyscience/io/serializer_component.py
class SerializerComponent:
    """
    This base class adds the capability of saving and loading (encoding/decoding, serializing/deserializing) easyscience
    objects via the `encode` and `decode` methods.
    The default encoder is `SerializerDict`, which converts the object to a dictionary.

    Shortcuts for dictionary and encoding is also present.
    """

    def __deepcopy__(self, memo):
        return self.from_dict(self.as_dict())

    def encode(self, skip: Optional[List[str]] = None, encoder: Optional[SerializerBase] = None, **kwargs) -> Any:
        """
        Use an encoder to covert an EasyScience object into another format. Default is to a dictionary using `SerializerDict`.

        :param skip: List of field names as strings to skip when forming the encoded object
        :param encoder: The encoder to be used for encoding the data. Default is `SerializerDict`
        :param kwargs: Any additional key word arguments to be passed to the encoder
        :return: encoded object containing all information to reform an EasyScience object.
        """
        if encoder is None:
            encoder = SerializerDict
        encoder_obj = encoder()
        return encoder_obj.encode(self, skip=skip, **kwargs)

    @classmethod
    def decode(cls, obj: Any, decoder: Optional[SerializerBase] = None) -> Any:
        """
        Re-create an EasyScience object from the output of an encoder. The default decoder is `SerializerDict`.

        :param obj: encoded EasyScience object
        :param decoder: decoder to be used to reform the EasyScience object
        :return: Reformed EasyScience object
        """

        if decoder is None:
            decoder = SerializerDict
        return decoder.decode(obj)

    def as_dict(self, skip: Optional[List[str]] = None) -> Dict[str, Any]:
        """
        Convert an EasyScience object into a full dictionary using `SerializerDict`.
        This is a shortcut for ```obj.encode(encoder=SerializerDict)```

        :param skip: List of field names as strings to skip when forming the dictionary
        :return: encoded object containing all information to reform an EasyScience object.
        """

        return self.encode(skip=skip, encoder=SerializerDict)

    @classmethod
    def from_dict(cls, obj_dict: Dict[str, Any]) -> None:
        """
        Re-create an EasyScience object from a full encoded dictionary.

        :param obj_dict: dictionary containing the serialized contents (from `SerializerDict`) of an EasyScience object
        :return: Reformed EasyScience object
        """

        return cls.decode(obj_dict, decoder=SerializerDict)

__deepcopy__

__deepcopy__(memo)
Source code in src/easyscience/io/serializer_component.py
def __deepcopy__(self, memo):
    return self.from_dict(self.as_dict())

encode

encode(skip=None, encoder=None, **kwargs)

Use an encoder to convert an EasyScience object into another format. The default is a dictionary via SerializerDict.

:param skip: List of field names as strings to skip when forming the encoded object :param encoder: The encoder to be used for encoding the data. Default is SerializerDict :param kwargs: Any additional key word arguments to be passed to the encoder :return: encoded object containing all information to reform an EasyScience object.

Source code in src/easyscience/io/serializer_component.py
def encode(self, skip: Optional[List[str]] = None, encoder: Optional[SerializerBase] = None, **kwargs) -> Any:
    """
    Use an encoder to covert an EasyScience object into another format. Default is to a dictionary using `SerializerDict`.

    :param skip: List of field names as strings to skip when forming the encoded object
    :param encoder: The encoder to be used for encoding the data. Default is `SerializerDict`
    :param kwargs: Any additional key word arguments to be passed to the encoder
    :return: encoded object containing all information to reform an EasyScience object.
    """
    if encoder is None:
        encoder = SerializerDict
    encoder_obj = encoder()
    return encoder_obj.encode(self, skip=skip, **kwargs)

decode classmethod

decode(obj, decoder=None)

Re-create an EasyScience object from the output of an encoder. The default decoder is SerializerDict.

:param obj: encoded EasyScience object :param decoder: decoder to be used to reform the EasyScience object :return: Reformed EasyScience object

Source code in src/easyscience/io/serializer_component.py
@classmethod
def decode(cls, obj: Any, decoder: Optional[SerializerBase] = None) -> Any:
    """
    Re-create an EasyScience object from the output of an encoder. The default decoder is `SerializerDict`.

    :param obj: encoded EasyScience object
    :param decoder: decoder to be used to reform the EasyScience object
    :return: Reformed EasyScience object
    """

    if decoder is None:
        decoder = SerializerDict
    return decoder.decode(obj)

as_dict

as_dict(skip=None)

Convert an EasyScience object into a full dictionary using SerializerDict. This is a shortcut for obj.encode(encoder=SerializerDict)

:param skip: List of field names as strings to skip when forming the dictionary :return: encoded object containing all information to reform an EasyScience object.

Source code in src/easyscience/io/serializer_component.py
def as_dict(self, skip: Optional[List[str]] = None) -> Dict[str, Any]:
    """
    Convert an EasyScience object into a full dictionary using `SerializerDict`.
    This is a shortcut for ```obj.encode(encoder=SerializerDict)```

    :param skip: List of field names as strings to skip when forming the dictionary
    :return: encoded object containing all information to reform an EasyScience object.
    """

    return self.encode(skip=skip, encoder=SerializerDict)

from_dict classmethod

from_dict(obj_dict)

Re-create an EasyScience object from a full encoded dictionary.

:param obj_dict: dictionary containing the serialized contents (from SerializerDict) of an EasyScience object :return: Reformed EasyScience object

Source code in src/easyscience/io/serializer_component.py
@classmethod
def from_dict(cls, obj_dict: Dict[str, Any]) -> None:
    """
    Re-create an EasyScience object from a full encoded dictionary.

    :param obj_dict: dictionary containing the serialized contents (from `SerializerDict`) of an EasyScience object
    :return: Reformed EasyScience object
    """

    return cls.decode(obj_dict, decoder=SerializerDict)

Base class providing serialization capabilities.
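
The following is a minimal round-trip sketch, not taken from the EasyScience sources. Point is a hypothetical subclass defined only for illustration; the one requirement picked up by the default dictionary encoder is that each constructor argument is stored on the instance under the same name (or the same name prefixed with an underscore).

from easyscience.io import SerializerComponent


class Point(SerializerComponent):
    """Hypothetical component with two plain-number constructor arguments."""

    def __init__(self, x, y):
        self.x = x
        self.y = y


pt = Point(x=1.0, y=2.0)

d = pt.as_dict()  # shortcut for pt.encode(encoder=SerializerDict)
print(d['@class'], d['x'], d['y'])  # Point 1.0 2.0

pt2 = Point.from_dict(d)  # rebuilt from the recorded @module/@class metadata
print(pt2.x, pt2.y)       # 1.0 2.0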

easyscience.io.SerializerDict

Bases: SerializerBase

This is a serializer that can encode and decode EasyScience objects to and from a dictionary.

Source code in src/easyscience/io/serializer_dict.py
class SerializerDict(SerializerBase):
    """
    This is a serializer that can encode and decode EasyScience objects to and from a dictionary.
    """

    def encode(
        self,
        obj: SerializerComponent,
        skip: Optional[List[str]] = None,
        full_encode: bool = False,
        **kwargs,
    ):
        """
        Convert an EasyScience object to a dictionary.

        :param obj: Object to be encoded.
        :param skip: List of field names as strings to skip when forming the encoded object
        :param full_encode: Should the data also be encoded (default False)
        :param kwargs: Any additional key word arguments to be passed to the encoder
        :return: object encoded to dictionary containing all information to reform an EasyScience object.
        """

        return self._convert_to_dict(obj, skip=skip, full_encode=full_encode, **kwargs)

    @classmethod
    def decode(cls, d: Dict) -> SerializerComponent:
        """
        Re-create an EasyScience object from the dictionary representation.

        :param d: Dict representation of an EasyScience object.
        :return: EasyScience object.
        """

        return SerializerBase._convert_from_dict(d)

encode

encode(obj, skip=None, full_encode=False, **kwargs)

Convert an EasyScience object to a dictionary.

:param obj: Object to be encoded. :param skip: List of field names as strings to skip when forming the encoded object :param full_encode: Should the data also be encoded (default False) :param kwargs: Any additional key word arguments to be passed to the encoder :return: object encoded to dictionary containing all information to reform an EasyScience object.

Source code in src/easyscience/io/serializer_dict.py
def encode(
    self,
    obj: SerializerComponent,
    skip: Optional[List[str]] = None,
    full_encode: bool = False,
    **kwargs,
):
    """
    Convert an EasyScience object to a dictionary.

    :param obj: Object to be encoded.
    :param skip: List of field names as strings to skip when forming the encoded object
    :param full_encode: Should the data also be encoded (default False)
    :param kwargs: Any additional key word arguments to be passed to the encoder
    :return: object encoded to dictionary containing all information to reform an EasyScience object.
    """

    return self._convert_to_dict(obj, skip=skip, full_encode=full_encode, **kwargs)

decode classmethod

decode(d)

Re-create an EasyScience object from the dictionary representation.

:param d: Dict representation of an EasyScience object. :return: EasyScience object.

Source code in src/easyscience/io/serializer_dict.py
@classmethod
def decode(cls, d: Dict) -> SerializerComponent:
    """
    Re-create an EasyScience object from the dictionary representation.

    :param d: Dict representation of an EasyScience object.
    :return: EasyScience object.
    """

    return SerializerBase._convert_from_dict(d)

Dictionary-based serialization implementation.
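
The following sketch shows the encoder being used directly rather than through the obj.encode()/obj.as_dict() shortcuts; Circle is a hypothetical SerializerComponent subclass used only for illustration.

from easyscience.io import SerializerComponent, SerializerDict


class Circle(SerializerComponent):
    """Hypothetical component storing its single constructor argument."""

    def __init__(self, radius):
        self.radius = radius


c = Circle(radius=1.5)
encoder = SerializerDict()

d = encoder.encode(c, skip=['radius'])  # `skip` drops named fields from the output
print('radius' in d)                    # False

d_full = encoder.encode(c)
c2 = SerializerDict.decode(d_full)      # rebuilt from the @module/@class metadata
print(c2.radius)                        # 1.5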

easyscience.io.SerializerBase

This is the base class for creating an encoder/decoder which can convert EasyScience objects. encode and decode are abstract methods to be implemented for each serializer. It is expected that the helper function _convert_to_dict will be used as a base for encoding (or the SerializerDict as it's more flexible).

Source code in src/easyscience/io/serializer_base.py
class SerializerBase:
    """
    This is the base class for creating an encoder/decoder which can convert EasyScience objects. `encode` and `decode` are
    abstract methods to be implemented for each serializer. It is expected that the helper function `_convert_to_dict`
    will be used as a base for encoding (or the `SerializerDict` as it's more flexible).
    """

    @abstractmethod
    def encode(self, obj: SerializerComponent, skip: Optional[List[str]] = None, **kwargs) -> any:
        """
        Abstract implementation of an encoder.

        :param obj: Object to be encoded.
        :param skip: List of field names as strings to skip when forming the encoded object
        :param kwargs: Any additional key word arguments to be passed to the encoder
        :return: encoded object containing all information to reform an EasyScience object.
        """

        pass

    @classmethod
    @abstractmethod
    def decode(cls, obj: Any) -> Any:
        """
        Re-create an EasyScience object from the output of an encoder. The default decoder is `SerializerDict`.

        :param obj: encoded EasyScience object
        :return: Reformed EasyScience object
        """
        pass

    @staticmethod
    def get_arg_spec(func: Callable) -> Tuple[Any, List[str]]:
        """
        Get the full argument specification of a function (typically `__init__`)

        :param func: Function to be inspected
        :return: Tuple of argument spec and arguments
        """

        spec = getfullargspec(func)
        args = spec.args[1:]
        return spec, args

    @staticmethod
    def _encode_objs(obj: Any) -> Dict[str, Any]:
        """
        A JSON serializable dict representation of an object.

        :param obj: any object to be encoded
        :param skip: List of field names as strings to skip when forming the encoded object
        :param kwargs: Key-words to pass to `SerializerBase`
        :return: JSON encoded dictionary
        """

        if isinstance(obj, datetime.datetime):
            return {
                '@module': 'datetime',
                '@class': 'datetime',
                'string': obj.__str__(),
            }
        if np is not None:
            if isinstance(obj, np.ndarray):
                if str(obj.dtype).startswith('complex'):
                    return {
                        '@module': 'numpy',
                        '@class': 'array',
                        'dtype': obj.dtype.__str__(),
                        'data': [obj.real.tolist(), obj.imag.tolist()],
                    }
                return {
                    '@module': 'numpy',
                    '@class': 'array',
                    'dtype': obj.dtype.__str__(),
                    'data': obj.tolist(),
                }
            if isinstance(obj, np.generic):
                return obj.item()
        try:
            return _e.default(obj)
        except TypeError:
            return obj

    def _convert_to_dict(
        self,
        obj: SerializerComponent,
        skip: Optional[List[str]] = None,
        full_encode: bool = False,
        **kwargs,
    ) -> dict:
        """
        A JSON serializable dict representation of an object.
        """
        if skip is None:
            skip = []

        if full_encode:
            new_obj = SerializerBase._encode_objs(obj)
            if new_obj is not obj:
                return new_obj

        d = {'@module': obj.__module__, '@class': obj.__class__.__name__}

        try:
            parent_module = obj.__module__.split('.')[0]
            module_version = import_module(parent_module).__version__  # type: ignore
            d['@version'] = '{}'.format(module_version)
        except (AttributeError, ImportError):
            d['@version'] = None  # type: ignore

        spec, args = SerializerBase.get_arg_spec(obj.__class__.__init__)
        if hasattr(obj, '_arg_spec'):
            args = obj._arg_spec

        redirect = getattr(obj, '_REDIRECT', {})

        def runner(o):
            if full_encode:
                return SerializerBase._encode_objs(o)
            else:
                return o

        for c in args:
            if c not in skip:
                if c in redirect.keys():
                    if redirect[c] is None:
                        continue
                    a = runner(redirect[c](obj))
                else:
                    try:
                        a = runner(obj.__getattribute__(c))
                    except AttributeError:
                        try:
                            a = runner(obj.__getattribute__('_' + c))
                        except AttributeError:
                            err = True
                            if hasattr(obj, 'kwargs'):
                                # type: ignore
                                option = getattr(obj, 'kwargs')
                                if hasattr(option, c):
                                    v = getattr(option, c)
                                    delattr(option, c)
                                    d.update(runner(v))  # pylint: disable=E1101
                                    err = False
                            if hasattr(obj, '_kwargs'):
                                # type: ignore
                                option = getattr(obj, '_kwargs')
                                if hasattr(option, c):
                                    v = getattr(option, c)
                                    delattr(option, c)
                                    d.update(runner(v))  # pylint: disable=E1101
                                    err = False
                            if err:
                                raise NotImplementedError(
                                    'Unable to automatically determine as_dict '
                                    'format from class. MSONAble requires all '
                                    'args to be present as either self.argname or '
                                    'self._argname, and kwargs to be present under'
                                    'a self.kwargs variable to automatically '
                                    'determine the dict format. Alternatively, '
                                    'you can implement both as_dict and from_dict.'
                                )
                d[c] = self._recursive_encoder(a, skip=skip, encoder=self, full_encode=full_encode, **kwargs)
        if spec.varargs is not None and getattr(obj, spec.varargs, None) is not None:
            d.update({spec.varargs: getattr(obj, spec.varargs)})
        if hasattr(obj, '_kwargs'):
            if not issubclass(type(obj), MutableSequence):
                d_k = list(d.keys())
                for k, v in getattr(obj, '_kwargs').items():
                    # We should have already obtained `key` and `_key`
                    if k not in skip and k not in d_k:
                        if k[0] == '_' and k[1:] in d_k:
                            continue
                        vv = v
                        if k in redirect.keys():
                            if redirect[k] is None:
                                continue
                            vv = redirect[k](obj)
                        v_ = runner(vv)
                        d[k] = self._recursive_encoder(
                            v_,
                            skip=skip,
                            encoder=self,
                            full_encode=full_encode,
                            **kwargs,
                        )
        if isinstance(obj, Enum):
            d.update({'value': runner(obj.value)})  # pylint: disable=E1101
        if hasattr(obj, '_convert_to_dict'):
            d = obj._convert_to_dict(d, self, skip=skip, **kwargs)
        if hasattr(obj, '_global_object') and 'unique_name' not in d and 'unique_name' not in skip:
            d['unique_name'] = obj.unique_name
        return d

    @staticmethod
    def _convert_from_dict(d):
        """
        Recursive method to support decoding dicts and lists containing EasyScience objects

        :param d: Dictionary containing JSONed EasyScience objects
        :return: Reformed EasyScience object

        """
        T_ = type(d)
        if isinstance(d, dict):
            if '@module' in d and '@class' in d:
                modname = d['@module']
                classname = d['@class']
            else:
                modname = None
                classname = None
            if modname and modname not in ['bson.objectid', 'numpy']:
                if modname == 'datetime' and classname == 'datetime':
                    try:
                        dt = datetime.datetime.strptime(d['string'], '%Y-%m-%d %H:%M:%S.%f')
                    except ValueError:
                        dt = datetime.datetime.strptime(d['string'], '%Y-%m-%d %H:%M:%S')
                    return dt

                mod = __import__(modname, globals(), locals(), [classname], 0)
                if hasattr(mod, classname):
                    cls_ = getattr(mod, classname)
                    data = {k: SerializerBase._convert_from_dict(v) for k, v in d.items() if not k.startswith('@')}
                    return cls_(**data)
            elif np is not None and modname == 'numpy' and classname == 'array':
                if d['dtype'].startswith('complex'):
                    return np.array([r + i * 1j for r, i in zip(*d['data'])], dtype=d['dtype'])
                return np.array(d['data'], dtype=d['dtype'])

        if issubclass(T_, (list, MutableSequence)):
            return [SerializerBase._convert_from_dict(x) for x in d]
        return d

    @staticmethod
    def deserialize_dict(in_dict: Dict[str, Any]) -> Dict[str, Any]:
        """
        Deserialize a dictionary using from_dict for ES objects and SerializerBase otherwise.
        This method processes constructor arguments, skipping metadata keys starting with '@'.

        :param in_dict: dictionary to deserialize
        :return: deserialized dictionary with constructor arguments
        """
        d = {key: SerializerBase._deserialize_value(value) for key, value in in_dict.items() if not key.startswith('@')}
        return d

    @staticmethod
    def _deserialize_value(value: Any) -> Any:
        """
        Deserialize a single value, using specialized handling for ES objects.

        :param value:
        :return: deserialized value
        """
        if not SerializerBase._is_serialized_easyscience_object(value):
            return SerializerBase._convert_from_dict(value)

        module_name = value['@module']
        class_name = value['@class']

        try:
            cls = SerializerBase._import_class(module_name, class_name)

            # Prefer from_dict() method for ES objects
            if hasattr(cls, 'from_dict'):
                return cls.from_dict(value)
            else:
                return SerializerBase._convert_from_dict(value)

        except (ImportError, ValueError):
            # Fallback to generic deserialization if class-specific fails
            return SerializerBase._convert_from_dict(value)

    @staticmethod
    def _is_serialized_easyscience_object(value: Any) -> bool:
        """
        Check if a value represents a serialized ES object.

        :param value:
        :return: True if this is a serialized ES object
        """
        return isinstance(value, dict) and '@module' in value and value['@module'].startswith('easy') and '@class' in value

    @staticmethod
    def _import_class(module_name: str, class_name: str):
        """
        Import a class from a module name and class name.

        :param module_name: name of the module
        :param class_name: name of the class
        :return: the imported class
        :raises ImportError: if module cannot be imported
        :raises ValueError: if class is not found in module
        """
        try:
            module = __import__(module_name, globals(), locals(), [class_name], 0)
        except ImportError as e:
            raise ImportError(f'Could not import module {module_name}') from e

        if not hasattr(module, class_name):
            raise ValueError(f'Class {class_name} not found in module {module_name}.')

        return getattr(module, class_name)

    def _recursive_encoder(self, obj, skip: List[str] = [], encoder=None, full_encode=False, **kwargs):
        """
        Walk through an object encoding it
        """
        if encoder is None:
            encoder = SerializerBase()
        T_ = type(obj)
        if issubclass(T_, (list, tuple, MutableSequence)):
            # Is it a core MutableSequence?
            if hasattr(obj, 'encode') and obj.__class__.__module__ != 'builtins':  # strings have encode
                return encoder._convert_to_dict(obj, skip, full_encode, **kwargs)
            elif hasattr(obj, 'to_dict') and obj.__class__.__module__.startswith('easy'):
                return encoder._convert_to_dict(obj, skip, full_encode, **kwargs)
            else:
                return [self._recursive_encoder(it, skip, encoder, full_encode, **kwargs) for it in obj]
        if isinstance(obj, dict):
            return {kk: self._recursive_encoder(vv, skip, encoder, full_encode, **kwargs) for kk, vv in obj.items()}
        if hasattr(obj, 'encode') and obj.__class__.__module__ != 'builtins':  # strings have encode
            return encoder._convert_to_dict(obj, skip, full_encode, **kwargs)
        elif hasattr(obj, 'to_dict') and obj.__class__.__module__.startswith('easy'):
            return encoder._convert_to_dict(obj, skip, full_encode, **kwargs)
        return obj

encode abstractmethod

encode(obj, skip=None, **kwargs)

Abstract implementation of an encoder.

:param obj: Object to be encoded. :param skip: List of field names as strings to skip when forming the encoded object :param kwargs: Any additional key word arguments to be passed to the encoder :return: encoded object containing all information to reform an EasyScience object.

Source code in src/easyscience/io/serializer_base.py
@abstractmethod
def encode(self, obj: SerializerComponent, skip: Optional[List[str]] = None, **kwargs) -> any:
    """
    Abstract implementation of an encoder.

    :param obj: Object to be encoded.
    :param skip: List of field names as strings to skip when forming the encoded object
    :param kwargs: Any additional key word arguments to be passed to the encoder
    :return: encoded object containing all information to reform an EasyScience object.
    """

    pass

decode abstractmethod classmethod

decode(obj)

Re-create an EasyScience object from the output of an encoder. The default decoder is SerializerDict.

:param obj: encoded EasyScience object :return: Reformed EasyScience object

Source code in src/easyscience/io/serializer_base.py
@classmethod
@abstractmethod
def decode(cls, obj: Any) -> Any:
    """
    Re-create an EasyScience object from the output of an encoder. The default decoder is `SerializerDict`.

    :param obj: encoded EasyScience object
    :return: Reformed EasyScience object
    """
    pass

get_arg_spec staticmethod

get_arg_spec(func)

Get the full argument specification of a function (typically __init__)

:param func: Function to be inspected :return: Tuple of argument spec and arguments

Source code in src/easyscience/io/serializer_base.py
@staticmethod
def get_arg_spec(func: Callable) -> Tuple[Any, List[str]]:
    """
    Get the full argument specification of a function (typically `__init__`)

    :param func: Function to be inspected
    :return: Tuple of argument spec and arguments
    """

    spec = getfullargspec(func)
    args = spec.args[1:]
    return spec, args

_encode_objs staticmethod

_encode_objs(obj)

A JSON serializable dict representation of an object.

:param obj: any object to be encoded :return: JSON-encodable representation of the object

Source code in src/easyscience/io/serializer_base.py
@staticmethod
def _encode_objs(obj: Any) -> Dict[str, Any]:
    """
    A JSON serializable dict representation of an object.

    :param obj: any object to be encoded
    :param skip: List of field names as strings to skip when forming the encoded object
    :param kwargs: Key-words to pass to `SerializerBase`
    :return: JSON encoded dictionary
    """

    if isinstance(obj, datetime.datetime):
        return {
            '@module': 'datetime',
            '@class': 'datetime',
            'string': obj.__str__(),
        }
    if np is not None:
        if isinstance(obj, np.ndarray):
            if str(obj.dtype).startswith('complex'):
                return {
                    '@module': 'numpy',
                    '@class': 'array',
                    'dtype': obj.dtype.__str__(),
                    'data': [obj.real.tolist(), obj.imag.tolist()],
                }
            return {
                '@module': 'numpy',
                '@class': 'array',
                'dtype': obj.dtype.__str__(),
                'data': obj.tolist(),
            }
        if isinstance(obj, np.generic):
            return obj.item()
    try:
        return _e.default(obj)
    except TypeError:
        return obj

_convert_to_dict

_convert_to_dict(
    obj, skip=None, full_encode=False, **kwargs
)

A JSON serializable dict representation of an object.

Source code in src/easyscience/io/serializer_base.py
def _convert_to_dict(
    self,
    obj: SerializerComponent,
    skip: Optional[List[str]] = None,
    full_encode: bool = False,
    **kwargs,
) -> dict:
    """
    A JSON serializable dict representation of an object.
    """
    if skip is None:
        skip = []

    if full_encode:
        new_obj = SerializerBase._encode_objs(obj)
        if new_obj is not obj:
            return new_obj

    d = {'@module': obj.__module__, '@class': obj.__class__.__name__}

    try:
        parent_module = obj.__module__.split('.')[0]
        module_version = import_module(parent_module).__version__  # type: ignore
        d['@version'] = '{}'.format(module_version)
    except (AttributeError, ImportError):
        d['@version'] = None  # type: ignore

    spec, args = SerializerBase.get_arg_spec(obj.__class__.__init__)
    if hasattr(obj, '_arg_spec'):
        args = obj._arg_spec

    redirect = getattr(obj, '_REDIRECT', {})

    def runner(o):
        if full_encode:
            return SerializerBase._encode_objs(o)
        else:
            return o

    for c in args:
        if c not in skip:
            if c in redirect.keys():
                if redirect[c] is None:
                    continue
                a = runner(redirect[c](obj))
            else:
                try:
                    a = runner(obj.__getattribute__(c))
                except AttributeError:
                    try:
                        a = runner(obj.__getattribute__('_' + c))
                    except AttributeError:
                        err = True
                        if hasattr(obj, 'kwargs'):
                            # type: ignore
                            option = getattr(obj, 'kwargs')
                            if hasattr(option, c):
                                v = getattr(option, c)
                                delattr(option, c)
                                d.update(runner(v))  # pylint: disable=E1101
                                err = False
                        if hasattr(obj, '_kwargs'):
                            # type: ignore
                            option = getattr(obj, '_kwargs')
                            if hasattr(option, c):
                                v = getattr(option, c)
                                delattr(option, c)
                                d.update(runner(v))  # pylint: disable=E1101
                                err = False
                        if err:
                            raise NotImplementedError(
                                'Unable to automatically determine as_dict '
                                'format from class. MSONAble requires all '
                                'args to be present as either self.argname or '
                                'self._argname, and kwargs to be present under'
                                'a self.kwargs variable to automatically '
                                'determine the dict format. Alternatively, '
                                'you can implement both as_dict and from_dict.'
                            )
            d[c] = self._recursive_encoder(a, skip=skip, encoder=self, full_encode=full_encode, **kwargs)
    if spec.varargs is not None and getattr(obj, spec.varargs, None) is not None:
        d.update({spec.varargs: getattr(obj, spec.varargs)})
    if hasattr(obj, '_kwargs'):
        if not issubclass(type(obj), MutableSequence):
            d_k = list(d.keys())
            for k, v in getattr(obj, '_kwargs').items():
                # We should have already obtained `key` and `_key`
                if k not in skip and k not in d_k:
                    if k[0] == '_' and k[1:] in d_k:
                        continue
                    vv = v
                    if k in redirect.keys():
                        if redirect[k] is None:
                            continue
                        vv = redirect[k](obj)
                    v_ = runner(vv)
                    d[k] = self._recursive_encoder(
                        v_,
                        skip=skip,
                        encoder=self,
                        full_encode=full_encode,
                        **kwargs,
                    )
    if isinstance(obj, Enum):
        d.update({'value': runner(obj.value)})  # pylint: disable=E1101
    if hasattr(obj, '_convert_to_dict'):
        d = obj._convert_to_dict(d, self, skip=skip, **kwargs)
    if hasattr(obj, '_global_object') and 'unique_name' not in d and 'unique_name' not in skip:
        d['unique_name'] = obj.unique_name
    return d

_convert_from_dict staticmethod

_convert_from_dict(d)

Recursive method to support decoding dicts and lists containing EasyScience objects

:param d: Dictionary containing JSONed EasyScience objects :return: Reformed EasyScience object

Source code in src/easyscience/io/serializer_base.py
@staticmethod
def _convert_from_dict(d):
    """
    Recursive method to support decoding dicts and lists containing EasyScience objects

    :param d: Dictionary containing JSONed EasyScience objects
    :return: Reformed EasyScience object

    """
    T_ = type(d)
    if isinstance(d, dict):
        if '@module' in d and '@class' in d:
            modname = d['@module']
            classname = d['@class']
        else:
            modname = None
            classname = None
        if modname and modname not in ['bson.objectid', 'numpy']:
            if modname == 'datetime' and classname == 'datetime':
                try:
                    dt = datetime.datetime.strptime(d['string'], '%Y-%m-%d %H:%M:%S.%f')
                except ValueError:
                    dt = datetime.datetime.strptime(d['string'], '%Y-%m-%d %H:%M:%S')
                return dt

            mod = __import__(modname, globals(), locals(), [classname], 0)
            if hasattr(mod, classname):
                cls_ = getattr(mod, classname)
                data = {k: SerializerBase._convert_from_dict(v) for k, v in d.items() if not k.startswith('@')}
                return cls_(**data)
        elif np is not None and modname == 'numpy' and classname == 'array':
            if d['dtype'].startswith('complex'):
                return np.array([r + i * 1j for r, i in zip(*d['data'])], dtype=d['dtype'])
            return np.array(d['data'], dtype=d['dtype'])

    if issubclass(T_, (list, MutableSequence)):
        return [SerializerBase._convert_from_dict(x) for x in d]
    return d

deserialize_dict staticmethod

deserialize_dict(in_dict)

Deserialize a dictionary using from_dict for ES objects and SerializerBase otherwise. This method processes constructor arguments, skipping metadata keys starting with '@'.

:param in_dict: dictionary to deserialize :return: deserialized dictionary with constructor arguments

Source code in src/easyscience/io/serializer_base.py
@staticmethod
def deserialize_dict(in_dict: Dict[str, Any]) -> Dict[str, Any]:
    """
    Deserialize a dictionary using from_dict for ES objects and SerializerBase otherwise.
    This method processes constructor arguments, skipping metadata keys starting with '@'.

    :param in_dict: dictionary to deserialize
    :return: deserialized dictionary with constructor arguments
    """
    d = {key: SerializerBase._deserialize_value(value) for key, value in in_dict.items() if not key.startswith('@')}
    return d

_deserialize_value staticmethod

_deserialize_value(value)

Deserialize a single value, using specialized handling for ES objects.

:param value: value to deserialize :return: deserialized value

Source code in src/easyscience/io/serializer_base.py
@staticmethod
def _deserialize_value(value: Any) -> Any:
    """
    Deserialize a single value, using specialized handling for ES objects.

    :param value:
    :return: deserialized value
    """
    if not SerializerBase._is_serialized_easyscience_object(value):
        return SerializerBase._convert_from_dict(value)

    module_name = value['@module']
    class_name = value['@class']

    try:
        cls = SerializerBase._import_class(module_name, class_name)

        # Prefer from_dict() method for ES objects
        if hasattr(cls, 'from_dict'):
            return cls.from_dict(value)
        else:
            return SerializerBase._convert_from_dict(value)

    except (ImportError, ValueError):
        # Fallback to generic deserialization if class-specific fails
        return SerializerBase._convert_from_dict(value)

_is_serialized_easyscience_object staticmethod

_is_serialized_easyscience_object(value)

Check if a value represents a serialized ES object.

:param value: value to check :return: True if this is a serialized ES object

Source code in src/easyscience/io/serializer_base.py
@staticmethod
def _is_serialized_easyscience_object(value: Any) -> bool:
    """
    Check if a value represents a serialized ES object.

    :param value:
    :return: True if this is a serialized ES object
    """
    return isinstance(value, dict) and '@module' in value and value['@module'].startswith('easy') and '@class' in value

_import_class staticmethod

_import_class(module_name, class_name)

Import a class from a module name and class name.

:param module_name: name of the module :param class_name: name of the class :return: the imported class :raises ImportError: if module cannot be imported :raises ValueError: if class is not found in module

Source code in src/easyscience/io/serializer_base.py
@staticmethod
def _import_class(module_name: str, class_name: str):
    """
    Import a class from a module name and class name.

    :param module_name: name of the module
    :param class_name: name of the class
    :return: the imported class
    :raises ImportError: if module cannot be imported
    :raises ValueError: if class is not found in module
    """
    try:
        module = __import__(module_name, globals(), locals(), [class_name], 0)
    except ImportError as e:
        raise ImportError(f'Could not import module {module_name}') from e

    if not hasattr(module, class_name):
        raise ValueError(f'Class {class_name} not found in module {module_name}.')

    return getattr(module, class_name)
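
A short sketch of the dynamic import, assuming the Polynomial model documented below is importable from easyscience.models.polynomial (its source path in this reference).

from easyscience.io.serializer_base import SerializerBase

cls = SerializerBase._import_class('easyscience.models.polynomial', 'Polynomial')
print(cls.__name__)  # 'Polynomial'

# Error behaviour, as raised by the method itself:
# an unknown module raises ImportError, a missing class raises ValueError.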

_recursive_encoder

_recursive_encoder(
    obj, skip=[], encoder=None, full_encode=False, **kwargs
)

Walk through an object encoding it

Source code in src/easyscience/io/serializer_base.py
def _recursive_encoder(self, obj, skip: List[str] = [], encoder=None, full_encode=False, **kwargs):
    """
    Walk through an object encoding it
    """
    if encoder is None:
        encoder = SerializerBase()
    T_ = type(obj)
    if issubclass(T_, (list, tuple, MutableSequence)):
        # Is it a core MutableSequence?
        if hasattr(obj, 'encode') and obj.__class__.__module__ != 'builtins':  # strings have encode
            return encoder._convert_to_dict(obj, skip, full_encode, **kwargs)
        elif hasattr(obj, 'to_dict') and obj.__class__.__module__.startswith('easy'):
            return encoder._convert_to_dict(obj, skip, full_encode, **kwargs)
        else:
            return [self._recursive_encoder(it, skip, encoder, full_encode, **kwargs) for it in obj]
    if isinstance(obj, dict):
        return {kk: self._recursive_encoder(vv, skip, encoder, full_encode, **kwargs) for kk, vv in obj.items()}
    if hasattr(obj, 'encode') and obj.__class__.__module__ != 'builtins':  # strings have encode
        return encoder._convert_to_dict(obj, skip, full_encode, **kwargs)
    elif hasattr(obj, 'to_dict') and obj.__class__.__module__.startswith('easy'):
        return encoder._convert_to_dict(obj, skip, full_encode, **kwargs)
    return obj

Base serialization functionality.

Models and Examples

Polynomial Model

easyscience.models.Polynomial

Bases: ObjBase

A polynomial model.

Parameters:

name (str): The name of the model. Default: 'polynomial'.
coefficients (Optional[Union[Iterable[Union[float, Parameter]], CollectionBase]]): The coefficients of the polynomial. Default: None.
Source code in src/easyscience/models/polynomial.py
class Polynomial(ObjBase):
    """
    A polynomial model.

    Parameters
    ----------
    name : str
        The name of the model.
    coefficients : Optional[Union[Iterable[Union[float, Parameter]], CollectionBase]]
        The coefficients of the polynomial.
    """

    coefficients: ClassVar[CollectionBase]

    def __init__(
        self,
        name: str = 'polynomial',
        coefficients: Optional[Union[Iterable[Union[float, Parameter]], CollectionBase]] = None,
    ):
        super(Polynomial, self).__init__(name, coefficients=CollectionBase('coefficients'))
        if coefficients is not None:
            if issubclass(type(coefficients), CollectionBase):
                self.coefficients = coefficients
            elif isinstance(coefficients, Iterable):
                for index, item in enumerate(coefficients):
                    if issubclass(type(item), Parameter):
                        self.coefficients.append(item)
                    elif isinstance(item, float):
                        self.coefficients.append(Parameter(name='c{}'.format(index), value=item))
                    else:
                        raise TypeError('Coefficients must be floats or Parameters')
            else:
                raise TypeError('coefficients must be a list or a CollectionBase')

    def __call__(self, x: np.ndarray, *args, **kwargs) -> np.ndarray:
        return np.polyval([c.value for c in self.coefficients], x)

    def __repr__(self):
        s = []
        if len(self.coefficients) >= 1:
            s += [f'{self.coefficients[0].value}']
            if len(self.coefficients) >= 2:
                s += [f'{self.coefficients[1].value}x']
                if len(self.coefficients) >= 3:
                    s += [f'{c.value}x^{i + 2}' for i, c in enumerate(self.coefficients[2:]) if c.value != 0]
        s.reverse()
        s = ' + '.join(s)
        return 'Polynomial({}, {})'.format(self.name, s)

coefficients class-attribute

coefficients

__init__

__init__(name='polynomial', coefficients=None)
Source code in src/easyscience/models/polynomial.py
def __init__(
    self,
    name: str = 'polynomial',
    coefficients: Optional[Union[Iterable[Union[float, Parameter]], CollectionBase]] = None,
):
    super(Polynomial, self).__init__(name, coefficients=CollectionBase('coefficients'))
    if coefficients is not None:
        if issubclass(type(coefficients), CollectionBase):
            self.coefficients = coefficients
        elif isinstance(coefficients, Iterable):
            for index, item in enumerate(coefficients):
                if issubclass(type(item), Parameter):
                    self.coefficients.append(item)
                elif isinstance(item, float):
                    self.coefficients.append(Parameter(name='c{}'.format(index), value=item))
                else:
                    raise TypeError('Coefficients must be floats or Parameters')
        else:
            raise TypeError('coefficients must be a list or a CollectionBase')

__call__

__call__(x, *args, **kwargs)
Source code in src/easyscience/models/polynomial.py
def __call__(self, x: np.ndarray, *args, **kwargs) -> np.ndarray:
    return np.polyval([c.value for c in self.coefficients], x)

__repr__

__repr__()
Source code in src/easyscience/models/polynomial.py
def __repr__(self):
    s = []
    if len(self.coefficients) >= 1:
        s += [f'{self.coefficients[0].value}']
        if len(self.coefficients) >= 2:
            s += [f'{self.coefficients[1].value}x']
            if len(self.coefficients) >= 3:
                s += [f'{c.value}x^{i + 2}' for i, c in enumerate(self.coefficients[2:]) if c.value != 0]
    s.reverse()
    s = ' + '.join(s)
    return 'Polynomial({}, {})'.format(self.name, s)

Built-in polynomial model for demonstration and testing.
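
A minimal usage sketch, assuming Polynomial is importable via its documented path easyscience.models.Polynomial. Note that __call__ hands the coefficient values straight to np.polyval, which interprets them from the highest order downwards.

import numpy as np

from easyscience.models import Polynomial

# p(x) = 2*x^2 + 0*x + 1 in np.polyval ordering (highest order first)
poly = Polynomial(name='demo', coefficients=[2.0, 0.0, 1.0])

x = np.linspace(-1.0, 1.0, 5)
y = poly(x)                         # evaluates np.polyval on the Parameter values
print(poly.coefficients[0].value)   # 2.0, stored as a Parameter named 'c0'
print(poly)                         # repr assembled from the coefficient values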

Job Management

Analysis and Experiments

easyscience.job.AnalysisBase

Bases: ObjBase

This virtual class allows for the creation of technique-specific Analysis objects.

Source code in src/easyscience/job/analysis.py
class AnalysisBase(ObjBase, metaclass=ABCMeta):
    """
    This virtual class allows for the creation of technique-specific Analysis objects.
    """

    def __init__(self, name: str, interface=None, *args, **kwargs):
        super(AnalysisBase, self).__init__(name, *args, **kwargs)
        self.name = name
        self._calculator = None
        self._minimizer = None
        self._fitter = None
        self.interface = interface

    @abstractmethod
    def calculate_theory(self, x: np.ndarray, **kwargs) -> np.ndarray:
        raise NotImplementedError('calculate_theory not implemented')

    @abstractmethod
    def fit(self, x: np.ndarray, y: np.ndarray, e: np.ndarray, **kwargs) -> None:
        raise NotImplementedError('fit not implemented')

    @property
    def calculator(self) -> str:
        if self._calculator is None:
            self._calculator = self.interface.current_interface_name
        return self._calculator

    @calculator.setter
    def calculator(self, value) -> None:
        # TODO: check if the calculator is available for the given JobType
        self.interface.switch(value, fitter=self._fitter)

    @property
    def minimizer(self) -> MinimizerBase:
        return self._minimizer

    @minimizer.setter
    def minimizer(self, minimizer: MinimizerBase) -> None:
        self._minimizer = minimizer

    # required dunder methods
    def __str__(self):
        return f'Analysis: {self.name}'

name instance-attribute

name = name

_calculator instance-attribute

_calculator = None

_minimizer instance-attribute

_minimizer = None

_fitter instance-attribute

_fitter = None

interface instance-attribute

interface = interface

calculator property writable

calculator

minimizer property writable

minimizer

__init__

__init__(name, interface=None, *args, **kwargs)
Source code in src/easyscience/job/analysis.py
def __init__(self, name: str, interface=None, *args, **kwargs):
    super(AnalysisBase, self).__init__(name, *args, **kwargs)
    self.name = name
    self._calculator = None
    self._minimizer = None
    self._fitter = None
    self.interface = interface

calculate_theory abstractmethod

calculate_theory(x, **kwargs)
Source code in src/easyscience/job/analysis.py
@abstractmethod
def calculate_theory(self, x: np.ndarray, **kwargs) -> np.ndarray:
    raise NotImplementedError('calculate_theory not implemented')

fit abstractmethod

fit(x, y, e, **kwargs)
Source code in src/easyscience/job/analysis.py
@abstractmethod
def fit(self, x: np.ndarray, y: np.ndarray, e: np.ndarray, **kwargs) -> None:
    raise NotImplementedError('fit not implemented')

__str__

__str__()
Source code in src/easyscience/job/analysis.py
def __str__(self):
    return f'Analysis: {self.name}'
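
Since AnalysisBase is abstract, a technique-specific subclass must provide calculate_theory and fit. The sketch below is illustrative only: the straight-line theory and print-based fit are placeholders, and the class is assumed to be importable from easyscience.job as documented.

import numpy as np

from easyscience.job import AnalysisBase


class LineAnalysis(AnalysisBase):
    """Toy analysis with a hard-coded straight-line theory."""

    def calculate_theory(self, x: np.ndarray, **kwargs) -> np.ndarray:
        return 2.0 * x + 1.0

    def fit(self, x: np.ndarray, y: np.ndarray, e: np.ndarray, **kwargs) -> None:
        # A real implementation would delegate to a minimizer via self._fitter here.
        print(f'Fitting {len(x)} points in {self.name}')


analysis = LineAnalysis(name='line')
print(analysis)                                        # 'Analysis: line'
theory = analysis.calculate_theory(np.linspace(0, 1, 3))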

easyscience.job.ExperimentBase

Bases: ObjBase

This virtual class allows for the creation of technique-specific Experiment objects.

Source code in src/easyscience/job/experiment.py
class ExperimentBase(ObjBase):
    """
    This virtual class allows for the creation of technique-specific Experiment objects.
    """

    def __init__(self, name: str, *args, **kwargs):
        super(ExperimentBase, self).__init__(name, *args, **kwargs)
        self._name = name

    # required dunder methods
    def __str__(self):
        return f'Experiment: {self._name}'

_name instance-attribute

_name = name

__init__

__init__(name, *args, **kwargs)
Source code in src/easyscience/job/experiment.py
def __init__(self, name: str, *args, **kwargs):
    super(ExperimentBase, self).__init__(name, *args, **kwargs)
    self._name = name

__str__

__str__()
Source code in src/easyscience/job/experiment.py
def __str__(self):
    return f'Experiment: {self._name}'

easyscience.job.JobBase

Bases: ObjBase

This virtual class allows for the creation of technique-specific Job objects.

Source code in src/easyscience/job/job.py
class JobBase(ObjBase, metaclass=ABCMeta):
    """
    This virtual class allows for the creation of technique-specific Job objects.
    """

    def __init__(self, name: str, *args, **kwargs):
        super(JobBase, self).__init__(name, *args, **kwargs)
        self.name = name
        self._theory = None
        self._experiment = None
        self._analysis = None
        self._summary = None
        self._info = None

    """
    JobBase consists of Theory, Experiment, Analysis virtual classes.
    Summary and Info classes are included to store additional information.
    """

    @property
    def theorerical_model(self):
        return self._theory

    @theorerical_model.setter
    @abstractmethod
    def theoretical_model(self, theory: TheoreticalModelBase):
        raise NotImplementedError('theory setter not implemented')

    @property
    def experiment(self):
        return self._experiment

    @experiment.setter
    @abstractmethod
    def experiment(self, experiment: ExperimentBase):
        raise NotImplementedError('experiment setter not implemented')

    @property
    def analysis(self):
        return self._analysis

    @analysis.setter
    @abstractmethod
    def analysis(self, analysis: AnalysisBase):
        raise NotImplementedError('analysis setter not implemented')

    # TODO: extend derived classes to include Summary and Info
    # @property
    # def summary(self):
    #     return self._summary

    # @summary.setter
    # @abstractmethod
    # def summary(self, summary: SummaryBase):
    #     raise NotImplementedError("summary setter not implemented")

    # @property
    # def info(self):
    #     return self._info

    # @info.setter
    # @abstractmethod
    # def info(self, info: InfoBase):
    #     raise NotImplementedError("info setter not implemented")

    @abstractmethod
    def calculate_theory(self, *args, **kwargs):
        raise NotImplementedError('calculate_theory not implemented')

    @abstractmethod
    def fit(self, *args, **kwargs):
        raise NotImplementedError('fit not implemented')

name instance-attribute

name = name

_theory instance-attribute

_theory = None

_experiment instance-attribute

_experiment = None

_analysis instance-attribute

_analysis = None

_summary instance-attribute

_summary = None

_info instance-attribute

_info = None

theorerical_model property

theorerical_model

experiment property writable

experiment

analysis property writable

analysis

__init__

__init__(name, *args, **kwargs)
Source code in src/easyscience/job/job.py
def __init__(self, name: str, *args, **kwargs):
    super(JobBase, self).__init__(name, *args, **kwargs)
    self.name = name
    self._theory = None
    self._experiment = None
    self._analysis = None
    self._summary = None
    self._info = None

theoretical_model abstractmethod

theoretical_model(theory)
Source code in src/easyscience/job/job.py
@theorerical_model.setter
@abstractmethod
def theoretical_model(self, theory: TheoreticalModelBase):
    raise NotImplementedError('theory setter not implemented')

calculate_theory abstractmethod

calculate_theory(*args, **kwargs)
Source code in src/easyscience/job/job.py
@abstractmethod
def calculate_theory(self, *args, **kwargs):
    raise NotImplementedError('calculate_theory not implemented')

fit abstractmethod

fit(*args, **kwargs)
Source code in src/easyscience/job/job.py
@abstractmethod
def fit(self, *args, **kwargs):
    raise NotImplementedError('fit not implemented')
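
A concrete Job must override the abstract property setters and the two abstract methods. The sketch below is a minimal illustration under the assumption that ObjBase imposes no further abstract members; the components wired in would be whatever technique-specific subclasses a package provides.

from easyscience.job import AnalysisBase, ExperimentBase, JobBase, TheoreticalModelBase


class SimpleJob(JobBase):
    """Toy Job that simply stores its three components."""

    @property
    def theoretical_model(self) -> TheoreticalModelBase:
        return self._theory

    @theoretical_model.setter
    def theoretical_model(self, theory: TheoreticalModelBase) -> None:
        self._theory = theory

    @property
    def experiment(self) -> ExperimentBase:
        return self._experiment

    @experiment.setter
    def experiment(self, experiment: ExperimentBase) -> None:
        self._experiment = experiment

    @property
    def analysis(self) -> AnalysisBase:
        return self._analysis

    @analysis.setter
    def analysis(self, analysis: AnalysisBase) -> None:
        self._analysis = analysis

    def calculate_theory(self, *args, **kwargs):
        return self.analysis.calculate_theory(*args, **kwargs)

    def fit(self, *args, **kwargs):
        return self.analysis.fit(*args, **kwargs)


# A technique package would then do, e.g.:
#   job = SimpleJob('demo')
#   job.analysis = SomeTechniqueAnalysis('analysis')   # hypothetical subclass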

easyscience.job.TheoreticalModelBase

Bases: ObjBase

This virtual class allows for the creation of technique-specific Theory objects.

Source code in src/easyscience/job/theoreticalmodel.py
class TheoreticalModelBase(ObjBase):
    """
    This virtual class allows for the creation of technique-specific Theory objects.
    """

    def __init__(self, name: str, *args, **kwargs):
        self._name = name
        super().__init__(name, *args, **kwargs)

    # required dunder methods
    def __str__(self):
        raise NotImplementedError('Copy not implemented')

    def as_dict(self, skip: list = []) -> dict:
        this_dict = super().as_dict(skip=skip)
        return this_dict

_name instance-attribute

_name = name

__init__

__init__(name, *args, **kwargs)
Source code in src/easyscience/job/theoreticalmodel.py
def __init__(self, name: str, *args, **kwargs):
    self._name = name
    super().__init__(name, *args, **kwargs)

__str__

__str__()
Source code in src/easyscience/job/theoreticalmodel.py
def __str__(self):
    raise NotImplementedError('Copy not implemented')

as_dict

as_dict(skip=[])
Source code in src/easyscience/job/theoreticalmodel.py
def as_dict(self, skip: list = []) -> dict:
    this_dict = super().as_dict(skip=skip)
    return this_dict

Utility Functions

Decorators

easyscience.global_object.undo_redo.property_stack

property_stack(arg, begin_macro=False)

Decorate a property setter with undo/redo functionality. This decorator can be used as:

@property_stack
def func()
....

or

@property_stack("This is the undo/redo text")
def func()
....

In the latter case the argument is a string which might be evaluated. The possible markups for this string are:

obj - The thing being operated on
func - The function being called
name - The name of the function being called
old_value - The pre-set value
new_value - The post-set value

An example would be Function {name}: Set from {old_value} to {new_value}

Source code in src/easyscience/global_object/undo_redo.py
def property_stack(arg: Union[str, Callable], begin_macro: bool = False) -> Callable:
    """
    Decorate a `property` setter with undo/redo functionality
    This decorator can be used as:

    @property_stack
    def func()
    ....

    or

    @property_stack("This is the undo/redo text")
    def func()
    ....

    In the latter case the argument is a string which might be evaluated.
    The possible markups for this string are;

    `obj` - The thing being operated on
    `func` - The function being called
    `name` - The name of the function being called.
    `old_value` - The pre-set value
    `new_value` - The post-set value

    An example would be `Function {name}: Set from {old_value} to {new_value}`

    """

    def make_wrapper(func: Callable, name: str, **kwargs) -> Callable:
        def wrapper(obj, *args) -> NoReturn:
            from easyscience import global_object  # Local import to avoid circular dependency

            old_value = getattr(obj, name)
            new_value = args[0]
            if issubclass(type(old_value), Iterable) or issubclass(type(new_value), Iterable):
                ret = np.all(old_value == new_value)
            else:
                ret = old_value == new_value
            if ret:
                return

            if global_object.debug:
                print(f"I'm {obj} and have been set from {old_value} to {new_value}!")

            global_object.stack.push(PropertyStack(obj, func, old_value, new_value, **kwargs))

        return functools.update_wrapper(wrapper, func)

    if isinstance(arg, Callable):
        func = arg
        name = func.__name__
        wrapper = make_wrapper(func, name)
        setattr(wrapper, 'func', func)
    else:
        txt = arg

        def wrapper(func: Callable) -> Callable:
            name = func.__name__
            inner_wrapper = make_wrapper(func, name, text=txt.format(**locals()))
            setattr(inner_wrapper, 'func', func)
            return inner_wrapper

    return wrapper

Decorator for properties that should be tracked in the undo/redo system.
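
A usage sketch, assuming the decorator is applied to the raw setter before property.setter wraps it, so that the getter of the same name can supply old_value; the undo call in the final comment is an assumed part of the global stack API.

from easyscience import global_object  # same import the decorator itself performs
from easyscience.global_object.undo_redo import property_stack


class Sample:
    def __init__(self):
        self._temperature = 300.0

    @property
    def temperature(self):
        return self._temperature

    @temperature.setter
    @property_stack
    def temperature(self, value):
        self._temperature = value


sample = Sample()
sample.temperature = 350.0    # routed through global_object.stack.push(PropertyStack(...))
# global_object.stack.undo()  # assumed undo-stack method for reverting the change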

Class Tools

easyscience.utils.classTools.addLoggedProp

addLoggedProp(inst, name, *args, **kwargs)
Source code in src/easyscience/utils/classTools.py
def addLoggedProp(inst: SerializerComponent, name: str, *args, **kwargs) -> None:
    cls = type(inst)
    annotations = getattr(cls, '__annotations__', False)
    if not hasattr(cls, '__perinstance'):
        cls = type(cls.__name__, (cls,), {'__module__': inst.__module__})
        cls.__perinstance = True
        if annotations:
            cls.__annotations__ = annotations
        inst.__old_class__ = inst.__class__
        inst.__class__ = cls
    setattr(cls, name, LoggedProperty(*args, **kwargs))

Utility for adding logged properties to classes.

String Utilities

easyscience.utils.string

transformation_to_string

transformation_to_string(
    matrix,
    translation_vec=(0, 0, 0),
    components=('x', 'y', 'z'),
    c='',
    delim=',',
)

Convenience method. Given a matrix, returns a string, e.g. x+2y+1/4.

:param matrix: 3x3 transformation matrix
:param translation_vec: translation vector, default (0, 0, 0)
:param components: either ('x', 'y', 'z') or ('a', 'b', 'c')
:param c: optional additional character to print (used for magmoms)
:param delim: delimiter
:return: xyz string

Source code in src/easyscience/utils/string.py
def transformation_to_string(matrix, translation_vec=(0, 0, 0), components=('x', 'y', 'z'), c='', delim=','):
    """
    Convenience method. Given matrix returns string, e.g. x+2y+1/4
    :param matrix
    :param translation_vec
    :param components: either ('x', 'y', 'z') or ('a', 'b', 'c')
    :param c: optional additional character to print (used for magmoms)
    :param delim: delimiter
    :return: xyz string
    """
    parts = []
    for i in range(3):
        s = ''
        m = matrix[i]
        t = translation_vec[i]
        for j, dim in enumerate(components):
            if m[j] != 0:
                f = Fraction(m[j]).limit_denominator()
                if s != '' and f >= 0:
                    s += '+'
                if abs(f.numerator) != 1:
                    s += str(f.numerator)
                elif f < 0:
                    s += '-'
                s += c + dim
                if f.denominator != 1:
                    s += '/' + str(f.denominator)
        if t != 0:
            s += ('+' if (t > 0 and s != '') else '') + str(Fraction(t).limit_denominator())
        if s == '':
            s += '0'
        parts.append(s)
    return delim.join(parts)
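
A worked example based on the implementation above; the matrix doubles the y component and the translation adds a quarter along z.

from easyscience.utils.string import transformation_to_string

matrix = [
    [1, 0, 0],
    [0, 2, 0],
    [0, 0, 1],
]

transformation_to_string(matrix, translation_vec=(0, 0, 0.25))
# -> 'x,2y,z+1/4'

transformation_to_string(matrix, components=('a', 'b', 'c'), c='m')
# -> 'ma,2mb,mc'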

Parameter Dependencies

easyscience.variable.parameter_dependency_resolver.resolve_all_parameter_dependencies

resolve_all_parameter_dependencies(obj)

Recursively find all Parameter objects in an object hierarchy and resolve their pending dependencies.

This function should be called after deserializing a complex object that contains Parameters with dependencies to ensure all dependency relationships are properly established.

:param obj: The object to search for Parameters (can be a single Parameter, list, dict, or complex object)

Source code in src/easyscience/variable/parameter_dependency_resolver.py
def resolve_all_parameter_dependencies(obj: Any) -> None:
    """
    Recursively find all Parameter objects in an object hierarchy and resolve their pending dependencies.

    This function should be called after deserializing a complex object that contains Parameters
    with dependencies to ensure all dependency relationships are properly established.

    :param obj: The object to search for Parameters (can be a single Parameter, list, dict, or complex object)
    """

    def _collect_parameters(item: Any, parameters: List[Parameter]) -> None:
        """Recursively collect all Parameter objects from an item."""
        if isinstance(item, Parameter):
            parameters.append(item)
        elif isinstance(item, dict):
            for value in item.values():
                _collect_parameters(value, parameters)
        elif isinstance(item, (list, tuple)):
            for element in item:
                _collect_parameters(element, parameters)
        elif hasattr(item, '__dict__'):
            # Check instance attributes
            for attr_name, attr_value in item.__dict__.items():
                if not attr_name.startswith('_'):  # Skip private attributes
                    _collect_parameters(attr_value, parameters)

            # Check class properties (descriptors like Parameter instances)
            for attr_name in dir(type(item)):
                if not attr_name.startswith('_'):  # Skip private attributes
                    class_attr = getattr(type(item), attr_name, None)
                    if isinstance(class_attr, property):
                        try:
                            attr_value = getattr(item, attr_name)
                            _collect_parameters(attr_value, parameters)
                        except (AttributeError, Exception):
                            # log the exception
                            print(f"Error accessing property '{attr_name}' of {item}")
                            # Skip properties that can't be accessed
                            continue

    # Collect all parameters
    all_parameters = []
    _collect_parameters(obj, all_parameters)

    # Resolve dependencies for all parameters that have pending dependencies
    resolved_count = 0
    error_count = 0
    errors = []

    for param in all_parameters:
        if hasattr(param, '_pending_dependency_string'):
            try:
                param.resolve_pending_dependencies()
                resolved_count += 1
            except Exception as e:
                error_count += 1
                serializer_id = getattr(param, '_DescriptorNumber__serializer_id', 'unknown')
                errors.append(
                    f"Failed to resolve dependencies for parameter '{param.name}'"
                    f" (unique_name: '{param.unique_name}', serializer_id: '{serializer_id}'): {e}"
                )

    # Report results
    if resolved_count > 0:
        print(f'Successfully resolved dependencies for {resolved_count} parameter(s).')

    if error_count > 0:
        error_message = f'Failed to resolve dependencies for {error_count} parameter(s):\n' + '\n'.join(errors)
        raise ValueError(error_message)

Resolve all pending parameter dependencies after deserialization.
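
A hedged sketch of the intended workflow, reusing the Polynomial model documented above: the function is normally called on an object just rebuilt from its serialized form, re-linking any Parameters that still carry a pending dependency string.

from easyscience.models import Polynomial
from easyscience.variable.parameter_dependency_resolver import resolve_all_parameter_dependencies

poly = Polynomial(coefficients=[1.0, 2.0])

# Typically 'poly' would instead be an object just restored via from_dict();
# a freshly constructed model simply has nothing pending to resolve.
# A summary is printed for each resolved parameter, and ValueError is raised
# with details if any pending dependency cannot be re-established.
resolve_all_parameter_dependencies(poly)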

easyscience.variable.parameter_dependency_resolver.get_parameters_with_pending_dependencies

get_parameters_with_pending_dependencies(obj)

Find all Parameter objects in an object hierarchy that have pending dependencies.

:param obj: The object to search for Parameters
:return: List of Parameters with pending dependencies

Source code in src/easyscience/variable/parameter_dependency_resolver.py
def get_parameters_with_pending_dependencies(obj: Any) -> List[Parameter]:
    """
    Find all Parameter objects in an object hierarchy that have pending dependencies.

    :param obj: The object to search for Parameters
    :return: List of Parameters with pending dependencies
    """
    parameters_with_pending = []

    def _collect_pending_parameters(item: Any) -> None:
        """Recursively collect all Parameter objects with pending dependencies."""
        if isinstance(item, Parameter):
            if hasattr(item, '_pending_dependency_string'):
                parameters_with_pending.append(item)
        elif isinstance(item, dict):
            for value in item.values():
                _collect_pending_parameters(value)
        elif isinstance(item, (list, tuple)):
            for element in item:
                _collect_pending_parameters(element)
        elif hasattr(item, '__dict__'):
            # Check instance attributes
            for attr_name, attr_value in item.__dict__.items():
                if not attr_name.startswith('_'):  # Skip private attributes
                    _collect_pending_parameters(attr_value)

            # Check class properties (descriptors like Parameter instances)
            for attr_name in dir(type(item)):
                if not attr_name.startswith('_'):  # Skip private attributes
                    class_attr = getattr(type(item), attr_name, None)
                    if isinstance(class_attr, property):
                        try:
                            attr_value = getattr(item, attr_name)
                            _collect_pending_parameters(attr_value)
                        except (AttributeError, Exception):
                            # log the exception
                            print(f"Error accessing property '{attr_name}' of {item}")
                            # Skip properties that can't be accessed
                            continue

    _collect_pending_parameters(obj)
    return parameters_with_pending

Find parameters that have unresolved dependencies.
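
A companion sketch: inspect which Parameters still need resolving before calling the resolver above (a freshly constructed model reports none).

from easyscience.models import Polynomial
from easyscience.variable.parameter_dependency_resolver import get_parameters_with_pending_dependencies

poly = Polynomial(coefficients=[1.0, 2.0])

pending = get_parameters_with_pending_dependencies(poly)
for param in pending:
    print(f'{param.unique_name} still has an unresolved dependency')
print(f'{len(pending)} parameter(s) pending')   # 0 for a model built directly in Python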

Constants and Enumerations

easyscience.global_object is a global singleton instance that manages shared application state, such as the undo/redo stack and the debug flag used by the property_stack decorator above.
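
A small illustration of touching the singleton directly; both attributes appear in the property_stack source above.

from easyscience import global_object

global_object.debug = True      # enables the debug print inside decorated property setters
print(global_object.stack)      # the undo/redo stack that decorated setters push onto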

Exception Classes

easyscience.fitting.minimizers.FitError

Bases: Exception

Source code in src/easyscience/fitting/minimizers/utils.py
class FitError(Exception):
    def __init__(self, e: Exception = None):
        self.e = e

    def __str__(self) -> str:
        s = ''
        if self.e is not None:
            s = f'{self.e}\n'
        return s + 'Something has gone wrong with the fit'

Exception raised when fitting operations fail.
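
A short sketch of how the exception composes its message; raising it directly here stands in for a failing fit call.

from easyscience.fitting.minimizers import FitError

try:
    raise FitError(ValueError('minimizer did not converge'))
except FitError as error:
    print(error)
    # minimizer did not converge
    # Something has gone wrong with the fit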

Usage Examples

For practical usage examples and tutorials, see the project documentation.

The API reference covers all public classes and methods. For implementation details and advanced usage patterns, refer to the source code and test suites in the repository.