nitpick.blender module

Dictionary blender and configuration file formats.

class nitpick.blender.BaseDoc(*, path: PathOrStr | None = None, string: str | None = None, obj: JsonDict | None = None)[source]

Bases: object

Base class for configuration file formats.

Parameters:
  • path – Path of the config file to be loaded.

  • string – Config in string format.

  • obj – Config object (Python dict, YamlDoc, TomlDoc instances).

property as_object: dict

String content converted to a Python object (dict, YAML object instance, etc.).

property as_string: str

Contents of the file or the original string provided when the instance was created.

abstract load() bool[source]

Load the configuration from a file, a string or a dict.

property reformatted: str

Reformat the configuration dict as a new string (it might not match the original string/file contents).
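To illustrate the contract that subclasses implement, here is a rough stand-in backed by the stdlib json module (JsonDocSketch is a hypothetical name used only for this sketch; the real subclasses are JsonDoc, TomlDoc and YamlDoc below):

```python
import json


class JsonDocSketch:
    """Hypothetical stand-in for the BaseDoc contract, backed by stdlib json."""

    def __init__(self, *, string=None, obj=None):
        self._string = string
        self._object = obj

    @property
    def as_object(self) -> dict:
        # Lazily parse the string the first time the object form is needed.
        if self._object is None:
            self._object = json.loads(self._string or "{}")
        return self._object

    @property
    def as_string(self) -> str:
        # The original string, when one was provided at creation time.
        return self._string or ""

    @property
    def reformatted(self) -> str:
        # A fresh dump; it may not match the original string/file contents.
        return json.dumps(self.as_object, indent=2)


doc = JsonDocSketch(string='{"a":1}')
```

The key point of the contract is that `reformatted` regenerates the text from the parsed object, so formatting details of the original string are not guaranteed to survive.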

class nitpick.blender.Comparison(actual: TBaseDoc, expected: JsonDict, special_config: SpecialConfig)[source]

Bases: object

A comparison between two dictionaries, computing missing items and differences.

property diff: TBaseDoc | None

Data that differs between the actual and the expected document.

property has_changes: bool

Return True if there is a difference or something missing.

property missing: TBaseDoc | None

Missing data.

property replace: TBaseDoc | None

Data to be replaced.
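Reduced to flat plain dicts, the gist of the comparison can be sketched as below (a simplification: the real class wraps TBaseDoc documents, handles nesting, and honours a SpecialConfig; `compare_flat` is a hypothetical helper, not part of the module):

```python
def compare_flat(actual: dict, expected: dict) -> tuple[dict, dict]:
    """Return (missing, diff): keys absent from `actual`, and keys present
    in both dicts but with different values."""
    missing = {k: v for k, v in expected.items() if k not in actual}
    diff = {k: v for k, v in expected.items() if k in actual and actual[k] != v}
    return missing, diff


missing, diff = compare_flat(
    {"line-length": 80},
    {"line-length": 88, "target": "py39"},
)
```

In this reduced model, `has_changes` corresponds to `bool(missing or diff)`.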

class nitpick.blender.ElementDetail(data: ElementData, key: str | list[str], index: int, scalar: bool, compact: str)[source]

Bases: object

Detailed information about an element of a list.

Method generated by attrs for class ElementDetail.

property cast_to_dict: Dict[str, Any]

Data cast to dict, for mypy.

compact: str
data: ElementData
classmethod from_data(index: int, data: Dict[str, Any] | str | int | float | CommentedMap | List[Any], jmes_key: str) ElementDetail[source]

Create an element detail from dict data.

index: int
key: str | list[str]
scalar: bool
class nitpick.blender.InlineTableTomlDecoder(_dict=<class 'dict'>)[source]

Bases: TomlDecoder

A hacky decoder to work around some bug (or unfinished work) in the Python TOML package.

https://github.com/uiri/toml/issues/362.

bounded_string(s)
embed_comments(idx, currentlevel)
get_empty_inline_table()[source]

Return an empty inline table.

Works around behaviour missing from the unmaintained toml package; the upstream issue remains unanswered: https://github.com/uiri/toml/issues/361

get_empty_table()
load_array(a)
load_inline_object(line, currentlevel, multikey=False, multibackslash=False)
load_line(line, currentlevel, multikey, multibackslash)
load_value(v, strictly_valid=True)
preserve_comment(line_no, key, comment, beginline)
class nitpick.blender.JsonDoc(*, path: PathOrStr | None = None, string: str | None = None, obj: JsonDict | None = None)[source]

Bases: BaseDoc

JSON configuration format.

property as_object: dict

String content converted to a Python object (dict, YAML object instance, etc.).

property as_string: str

Contents of the file or the original string provided when the instance was created.

load() bool[source]

Load a JSON file by its path, a string or a dict.

property reformatted: str

Reformat the configuration dict as a new string (it might not match the original string/file contents).

class nitpick.blender.ListDetail(data: ListOrCommentedSeq, elements: list[ElementDetail])[source]

Bases: object

Detailed info about a list.

Method generated by attrs for class ListDetail.

data: ListOrCommentedSeq
elements: list[ElementDetail]
find_by_key(desired: ElementDetail) ElementDetail | None[source]

Find an element by key.

classmethod from_data(data: List[Any] | CommentedSeq, jmes_key: str) ListDetail[source]

Create a list detail from list data.

nitpick.blender.SEPARATOR_QUOTED_SPLIT = '#$@'

Special unique separator for nitpick.blender.quoted_split().

class nitpick.blender.SensibleYAML[source]

Bases: YAML

YAML with sensible defaults but an inefficient dump to string.

Output of dump() is returned as a string by dumps().

Parameters (inherited from ruamel.yaml.YAML):
  • typ – 'rt'/None -> RoundTripLoader/RoundTripDumper (default); 'safe' -> SafeLoader/SafeDumper; 'unsafe' -> normal/unsafe Loader/Dumper; 'base' -> BaseLoader

  • pure – if True, only use pure Python modules

  • input/output – needed to work as a context manager

  • plug_ins – a list of plug-in files

Xdump_all(documents: Any, stream: Any, *, transform: Any | None = None) Any

Serialize a sequence of Python objects into a YAML stream.

property block_seq_indent: Any
compact(seq_seq: Any | None = None, seq_map: Any | None = None) None
compose(stream: Path | Any) Any

Parse the first YAML document in a stream and produce the corresponding representation tree.

compose_all(stream: Path | Any) Any

Parse all YAML documents in a stream and produce corresponding representation trees.

property composer: Any
property constructor: Any
dump(data: Path | Any, stream: Any | None = None, *, transform: Any | None = None) Any
dump_all(documents: Any, stream: Path | Any, *, transform: Any | None = None) Any
dumps(data) str[source]

Dump the YAML document to a string instead of a file or stdout (inefficient; see the class docstring).

emit(events: Any, stream: Any) None

Emit YAML parsing events into a stream. If stream is None, return the produced string instead.

property emitter: Any
get_constructor_parser(stream: Any) Any

The old CYAML loaders need special setup and therefore access to the stream.

get_serializer_representer_emitter(stream: Any, tlca: Any) Any
property indent: Any
load(stream: Path | Any) Any

At this point you either have the non-pure Parser (which has its own reader and scanner) or the pure Parser. If the pure Parser is set, then set the Reader and Scanner if they are not already set. If either the Scanner or the Reader is set, you cannot use the non-pure Parser, so reset it to the pure Parser and set the Reader and Scanner as necessary.

load_all(stream: Path | Any) Any
loads(string: str)[source]

Load a YAML document from a string instead of a file.

map(**kw: Any) Any
official_plug_ins() Any

Search for subdirectories that are plug-ins. If __file__ is not available (e.g. in single-file installers that do not properly emulate a file system, issue 324), no plug-ins will be found. If any are packaged, you know which files they are and can provide them explicitly during instantiation:

yaml = ruamel.yaml.YAML(plug_ins=['ruamel/yaml/jinja2/__plug_in__'])

parse(stream: Any) Any

Parse a YAML stream and produce parsing events.

property parser: Any
property reader: Any
register_class(cls: Any) Any

Register a class for dumping/loading. If the class has a yaml_tag attribute, use it for registration; otherwise use the class name. If it has to_yaml/from_yaml methods, use those to dump/load; otherwise dump its attributes as a mapping.

property representer: Any
property resolver: Any
scan(stream: Any) Any

Scan a YAML stream and produce scanning tokens.

property scanner: Any
seq(*args: Any) Any
serialize(node: Any, stream: Any | None) Any

Serialize a representation tree into a YAML stream. If stream is None, return the produced string instead.

serialize_all(nodes: Any, stream: Any | None) Any

Serialize a sequence of representation trees into a YAML stream. If stream is None, return the produced string instead.

property serializer: Any
property version: Any | None
class nitpick.blender.TomlDoc(*, path: PathOrStr | None = None, string: str | None = None, obj: JsonDict | None = None, use_tomlkit=False)[source]

Bases: BaseDoc

TOML configuration format.

property as_object: dict

String content converted to a Python object (dict, YAML object instance, etc.).

property as_string: str

Contents of the file or the original string provided when the instance was created.

load() bool[source]

Load a TOML file by its path, a string or a dict.

property reformatted: str

Reformat the configuration dict as a new string (it might not match the original string/file contents).

class nitpick.blender.YamlDoc(*, path: PathOrStr | None = None, string: str | None = None, obj: JsonDict | None = None)[source]

Bases: BaseDoc

YAML configuration format.

property as_object: dict

String content converted to a Python object (dict, YAML object instance, etc.).

property as_string: str

Contents of the file or the original string provided when the instance was created.

load() bool[source]

Load a YAML file by its path, a string or a dict.

property reformatted: str

Reformat the configuration dict as a new string (it might not match the original string/file contents).

updater: SensibleYAML
nitpick.blender.compare_lists_with_dictdiffer(actual: list | dict, expected: list | dict, *, return_list: bool = True) list | dict[source]

Compare two lists using dictdiffer.

nitpick.blender.custom_reducer(separator: str) Callable[source]

Custom reducer for flatten_dict.flatten_dict.flatten() accepting a separator.

nitpick.blender.custom_splitter(separator: str) Callable[source]

Custom splitter for flatten_dict.flatten_dict.unflatten() accepting a separator.
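A reducer joins nested keys into one flat key and a splitter reverses it. The real factories plug into flatten_dict; a minimal self-contained sketch of the shape they may take:

```python
def custom_reducer(separator):
    """Return a reducer that joins a parent key and a child key with the separator."""
    def reducer(parent_key, key):
        return f"{parent_key}{separator}{key}" if parent_key else key
    return reducer


def custom_splitter(separator):
    """Return a splitter that breaks a flat key back into its parts."""
    def splitter(flat_key):
        return tuple(flat_key.split(separator))
    return splitter
```

The two are inverses of each other for keys that do not themselves contain the separator; the quoting helpers below exist for keys that do.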

nitpick.blender.flatten_quotes(dict_: Dict[str, Any], separator='.') Dict[str, Any][source]

Flatten a dict keeping quotes in keys.
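A sketch of the behaviour, assuming keys that contain the separator get wrapped in double quotes so the flat key can later be split unambiguously (the real function builds on flatten_dict; this recursive reimplementation is illustrative only):

```python
def flatten_quotes(dict_, separator="."):
    """Flatten a nested dict, quoting keys that contain the separator."""
    result = {}

    def walk(prefix, obj):
        for key, value in obj.items():
            part = f'"{key}"' if separator in key else key
            flat = f"{prefix}{separator}{part}" if prefix else part
            if isinstance(value, dict) and value:
                walk(flat, value)
            else:
                result[flat] = value

    walk("", dict_)
    return result
```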

nitpick.blender.is_scalar(value: Dict[str, Any] | List[Any] | str | float) bool[source]

Return True if the value is NOT a dict or a list.
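The check amounts to a single isinstance test:

```python
def is_scalar(value):
    """True when the value is neither a dict nor a list."""
    return not isinstance(value, (dict, list))
```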

nitpick.blender.quote_if_dotted(key: str) str[source]

Quote the key if it has a dot.

nitpick.blender.quote_reducer(separator: str) Callable[source]

Reducer used to unflatten dicts. Quote keys when they have dots.
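These two compose naturally: the reducer quotes each key before joining. A minimal sketch under that assumption:

```python
def quote_if_dotted(key: str) -> str:
    """Wrap the key in double quotes when it contains a dot."""
    return f'"{key}"' if "." in key else key


def quote_reducer(separator: str):
    """Return a reducer that quotes dotted keys before joining them."""
    def reducer(parent_key, key):
        quoted = quote_if_dotted(key)
        return f"{parent_key}{separator}{quoted}" if parent_key else quoted
    return reducer
```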

nitpick.blender.quoted_split(string_: str, separator='.') list[str][source]

Split a string by a separator, but considering quoted parts (single or double quotes).

>>> quoted_split("my.key.without.quotes")
['my', 'key', 'without', 'quotes']
>>> quoted_split('"double.quoted.string"')
['double.quoted.string']
>>> quoted_split('"double.quoted.string".and.after')
['double.quoted.string', 'and', 'after']
>>> quoted_split('something.before."double.quoted.string"')
['something', 'before', 'double.quoted.string']
>>> quoted_split("'single.quoted.string'")
['single.quoted.string']
>>> quoted_split("'single.quoted.string'.and.after")
['single.quoted.string', 'and', 'after']
>>> quoted_split("something.before.'single.quoted.string'")
['something', 'before', 'single.quoted.string']
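One way to reproduce the behaviour shown above is a regex that treats quoted runs as atoms (a sketch; the real implementation may instead use the SEPARATOR_QUOTED_SPLIT trick documented earlier):

```python
import re


def quoted_split(string_: str, separator: str = ".") -> list[str]:
    """Split on the separator, keeping single- or double-quoted runs whole."""
    # Match a double-quoted run, a single-quoted run, or any run of
    # non-separator characters; then strip the surrounding quotes.
    pattern = rf'"[^"]*"|\'[^\']*\'|[^{re.escape(separator)}]+'
    return [part.strip("'\"") for part in re.findall(pattern, string_)]
```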
nitpick.blender.quotes_splitter(flat_key: str) tuple[str, ...][source]

Split keys keeping quoted strings together.

nitpick.blender.replace_or_add_list_element(yaml_obj: CommentedSeq | CommentedMap, element: Any, key: str, index: int) None[source]

Replace or add a new element in a YAML sequence of mappings.

nitpick.blender.search_json(json_data: ElementData, jmespath_expression: ParsedResult | str, default: Any | None = None) Any[source]

Search a dictionary or list using a JMESPath expression. Return a default value if not found.

>>> data = {"root": {"app": [1, 2], "test": "something"}}
>>> search_json(data, "root.app", None)
[1, 2]
>>> search_json(data, "root.test", None)
'something'
>>> search_json(data, "root.unknown", "")
''
>>> search_json(data, "root.unknown", None)
>>> search_json(data, "root.unknown")
>>> search_json(data, jmespath.compile("root.app"), [])
[1, 2]
>>> search_json(data, jmespath.compile("root.whatever"), "xxx")
'xxx'
>>> search_json(data, "")
>>> search_json(data, None)
Parameters:
  • jmespath_expression – A compiled JMESPath expression or a string with an expression.

  • json_data – The dictionary to be searched.

  • default – Default value in case nothing is found.

Returns:

The object that was found or the default value.

nitpick.blender.set_key_if_not_empty(dict_: Dict[str, Any], key: str, value: Any) None[source]

Set the key on the dict only when the value is not empty.
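Assuming "not empty" means truthy, the helper reduces to a guarded assignment:

```python
def set_key_if_not_empty(dict_, key, value):
    """Set the key only when the value is truthy (non-empty)."""
    if value:
        dict_[key] = value
```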

nitpick.blender.traverse_toml_tree(document: TOMLDocument, dictionary)[source]

Traverse a TOML document recursively and change values, keeping its formatting and comments.

nitpick.blender.traverse_yaml_tree(yaml_obj: CommentedSeq | CommentedMap, change: Dict[str, Any])[source]

Traverse a YAML document recursively and change values, keeping its formatting and comments.
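Both traversal helpers follow the same recursive merge pattern. Stripped of the comment-preserving tomlkit/ruamel types (which are the whole point of the real functions), the idea is:

```python
def traverse_tree(document: dict, change: dict) -> None:
    """Recursively merge `change` into `document` in place.

    Sketch on plain dicts; the real helpers operate on TOMLDocument /
    CommentedMap so that formatting and comments survive the update.
    """
    for key, value in change.items():
        if isinstance(value, dict) and isinstance(document.get(key), dict):
            traverse_tree(document[key], value)  # descend into nested tables
        else:
            document[key] = value  # replace or add the leaf value
```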