nitpick.blender module
Dictionary blender and configuration file formats.
- class nitpick.blender.BaseDoc(*, path: Optional[Union[Path, str]] = None, string: Optional[str] = None, obj: Optional[Dict[str, Any]] = None)[source]
Bases:
object
Base class for configuration file formats.
- Parameters
path – Path of the config file to be loaded.
string – Config in string format.
obj – Config object (Python dict, YamlDoc, TomlDoc instances).
- property as_object: dict
String content converted to a Python object (dict, YAML object instance, etc.).
- class nitpick.blender.Comparison(actual: TBaseDoc, expected: Dict[str, Any], special_config: SpecialConfig)[source]
Bases:
object
A comparison between two dictionaries, computing missing items and differences.
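The core idea of such a comparison can be sketched with plain dictionaries (a simplified illustration with hypothetical names, not the actual Comparison implementation, which also handles nested documents and special configs):

```python
def compare_dicts(actual: dict, expected: dict) -> tuple[dict, dict]:
    """Return (missing, diff): expected keys absent from `actual`,
    and expected keys whose values differ in `actual`."""
    missing = {k: v for k, v in expected.items() if k not in actual}
    diff = {k: v for k, v in expected.items() if k in actual and actual[k] != v}
    return missing, diff

missing, diff = compare_dicts({"line-length": 80}, {"line-length": 120, "quiet": True})
# missing == {"quiet": True}; diff == {"line-length": 120}
```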
- class nitpick.blender.ElementDetail(data: ElementData, key: str | list[str], index: int, scalar: bool, compact: str)[source]
Bases:
object
Detailed information about an element of a list.
Method generated by attrs for class ElementDetail.
- data: ElementData
- class nitpick.blender.InlineTableTomlDecoder(_dict=<class 'dict'>)[source]
Bases:
TomlDecoder
A custom decoder that works around a bug (or unfinished feature) in the Python TOML package:
https://github.com/uiri/toml/issues/362.
- bounded_string(s)
- embed_comments(idx, currentlevel)
- get_empty_inline_table()[source]
Workaround for the unmaintained toml package; the upstream issue remains unanswered: https://github.com/uiri/toml/issues/361
- get_empty_table()
- load_array(a)
- load_inline_object(line, currentlevel, multikey=False, multibackslash=False)
- load_line(line, currentlevel, multikey, multibackslash)
- load_value(v, strictly_valid=True)
- preserve_comment(line_no, key, comment, beginline)
- class nitpick.blender.JsonDoc(*, path: Optional[Union[Path, str]] = None, string: Optional[str] = None, obj: Optional[Dict[str, Any]] = None)[source]
Bases:
BaseDoc
JSON configuration format.
- property as_object: dict
String content converted to a Python object (dict, YAML object instance, etc.).
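The lazy parse-on-access behavior behind as_object can be illustrated with the standard-library json module (a minimal sketch with a hypothetical class name, not nitpick's implementation):

```python
import json

class MiniJsonDoc:
    """Minimal stand-in for JsonDoc: holds a JSON string, parses on demand."""

    def __init__(self, *, string: str):
        self._string = string
        self._object = None

    @property
    def as_object(self) -> dict:
        # Parse the string only once, on first access.
        if self._object is None:
            self._object = json.loads(self._string)
        return self._object

doc = MiniJsonDoc(string='{"editor": {"tab_size": 4}}')
result = doc.as_object  # {'editor': {'tab_size': 4}}
```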
- class nitpick.blender.ListDetail(data: ListOrCommentedSeq, elements: list[ElementDetail])[source]
Bases:
object
Detailed info about a list.
Method generated by attrs for class ListDetail.
- data: ListOrCommentedSeq
- elements: list[ElementDetail]
- find_by_key(desired: ElementDetail) ElementDetail | None [source]
Find an element by key.
- nitpick.blender.SEPARATOR_QUOTED_SPLIT = '#$@'
Special unique separator for nitpick.blender.quoted_split().
- class nitpick.blender.SensibleYAML[source]
Bases:
YAML
YAML with sensible defaults but an inefficient dump to string.
- typ: 'rt'/None -> RoundTripLoader/RoundTripDumper (default); 'safe' -> SafeLoader/SafeDumper; 'unsafe' -> normal/unsafe Loader/Dumper; 'base' -> BaseLoader
- pure: if True, only use pure Python modules
- input/output: needed to work as a context manager
- plug_ins: a list of plug-in files
- Xdump_all(documents: Any, stream: Any, *, transform: Any = None) Any
Serialize a sequence of Python objects into a YAML stream.
- property block_seq_indent
- compose(stream: Union[Path, StreamTextType]) Any
Parse the first YAML document in a stream and produce the corresponding representation tree.
- compose_all(stream: Union[Path, StreamTextType]) Any
Parse all YAML documents in a stream and produce corresponding representation trees.
- property composer
- property constructor
- dump(data: Any, stream: Union[Path, StreamType] = None, *, transform: Any = None) Any
- dump_all(documents: Any, stream: Union[Path, StreamType], *, transform: Any = None) Any
- dumps(data) str [source]
Dump to a string (ruamel.yaml is designed to dump to a file or stdout, so dumping to a string is inefficient).
- emit(events: Any, stream: Any) None
Emit YAML parsing events into a stream. If stream is None, return the produced string instead.
- property emitter
- get_constructor_parser(stream: StreamTextType) Any
The old CYAML needs special setup, and therefore receives the stream.
- get_serializer_representer_emitter(stream: StreamType, tlca: Any) Any
- property indent
- load(stream: Union[Path, StreamTextType]) Any
At this point you either have the non-pure Parser (which has its own reader and scanner) or the pure Parser. If the pure Parser is set, set the Reader and Scanner if they are not already set. If either the Scanner or Reader is set, the non-pure Parser cannot be used, so reset it to the pure Parser and set the Reader and Scanner as necessary.
- load_all(stream: Union[Path, StreamTextType]) Any
- loads(string: str)[source]
Load YAML from a string (an unusual use case for ruamel.yaml, which is oriented toward file streams).
- map(**kw: Any) Any
- official_plug_ins() Any
Search for a list of subdirectories that are plug-ins. If __file__ is not available (e.g. in single-file installers that do not properly emulate a file system, issue 324), no plug-ins will be found. If any are packaged, you know which files they are and can provide them explicitly during instantiation:
yaml = ruamel.yaml.YAML(plug_ins=['ruamel/yaml/jinja2/__plug_in__'])
- parse(stream: StreamTextType) Any
Parse a YAML stream and produce parsing events.
- property parser
- property reader
- register_class(cls: Any) Any
Register a class for dumping and loading. If the class has a yaml_tag attribute, use it for registration; otherwise use the class name. If it has to_yaml/from_yaml methods, use those to dump/load; otherwise dump its attributes as a mapping.
- property representer
- property resolver
- scan(stream: StreamTextType) Any
Scan a YAML stream and produce scanning tokens.
- property scanner
- seq(*args: Any) Any
- serialize(node: Any, stream: Optional[StreamType]) Any
Serialize a representation tree into a YAML stream. If stream is None, return the produced string instead.
- serialize_all(nodes: Any, stream: Optional[StreamType]) Any
Serialize a sequence of representation trees into a YAML stream. If stream is None, return the produced string instead.
- property serializer
- class nitpick.blender.TomlDoc(*, path: Optional[Union[Path, str]] = None, string: Optional[str] = None, obj: Optional[Dict[str, Any]] = None, use_tomlkit=False)[source]
Bases:
BaseDoc
TOML configuration format.
- property as_object: dict
String content converted to a Python object (dict, YAML object instance, etc.).
- class nitpick.blender.YamlDoc(*, path: Optional[Union[Path, str]] = None, string: Optional[str] = None, obj: Optional[Dict[str, Any]] = None)[source]
Bases:
BaseDoc
YAML configuration format.
- property as_object: dict
String content converted to a Python object (dict, YAML object instance, etc.).
- property as_string: str
Contents of the file or the original string provided when the instance was created.
- property reformatted: str
Reformat the configuration dict as a new string (it might not match the original string/file contents).
- updater: SensibleYAML
- nitpick.blender.compare_lists_with_dictdiffer(actual: list | dict, expected: list | dict, *, return_list: bool = True) list | dict [source]
Compare two lists using dictdiffer.
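The semantics of reporting expected items missing from a list can be approximated with the standard library alone (a rough, order-insensitive sketch with a hypothetical name; the real function delegates to the dictdiffer package and handles nested structures):

```python
def list_diff(actual: list, expected: list) -> list:
    """Elements of `expected` that are missing from `actual`."""
    return [item for item in expected if item not in actual]

missing_items = list_diff(["flake8", "black"], ["black", "isort"])
# missing_items == ["isort"]
```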
- nitpick.blender.custom_reducer(separator: str) Callable [source]
Custom reducer for flatten_dict.flatten_dict.flatten(), accepting a separator.
- nitpick.blender.custom_splitter(separator: str) Callable [source]
Custom splitter for flatten_dict.flatten_dict.unflatten(), accepting a separator.
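The reducer/splitter pair expected by flatten_dict can be sketched as closures over a separator (a sketch of the pattern with hypothetical names, not nitpick's exact code; flatten_dict calls the reducer with (parent, key), where parent is None at the top level):

```python
from typing import Callable

def make_reducer(separator: str) -> Callable:
    """Build a reducer that joins parent and child keys with `separator`."""
    def reducer(parent, key):
        return key if parent is None else f"{parent}{separator}{key}"
    return reducer

def make_splitter(separator: str) -> Callable:
    """Build the inverse splitter: flat key -> tuple of key parts."""
    def splitter(flat_key: str) -> tuple:
        return tuple(flat_key.split(separator))
    return splitter

reducer = make_reducer(".")
splitter = make_splitter(".")
# reducer("tool", "black") == "tool.black"; splitter("tool.black") == ("tool", "black")
```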
- nitpick.blender.flatten_quotes(dict_: Dict[str, Any], separator='.') Dict[str, Any] [source]
Flatten a dict keeping quotes in keys.
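The quote-preserving idea can be sketched recursively (a simplified illustration with a hypothetical name, not nitpick's implementation, which builds on flatten_dict): keys containing the separator are wrapped in double quotes so the flat key can later be split unambiguously.

```python
def flatten_keep_quotes(dict_: dict, separator: str = ".", prefix: str = "") -> dict:
    """Recursively flatten nested dicts, quoting keys that contain the separator."""
    flat = {}
    for key, value in dict_.items():
        quoted = f'"{key}"' if separator in key else key
        full_key = f"{prefix}{separator}{quoted}" if prefix else quoted
        if isinstance(value, dict):
            flat.update(flatten_keep_quotes(value, separator, full_key))
        else:
            flat[full_key] = value
    return flat

flat = flatten_keep_quotes({"tool": {"black": {"line-length": 120}}})
# flat == {"tool.black.line-length": 120}
dotted = flatten_keep_quotes({"repos": {"local.yaml": True}})
# dotted == {'repos."local.yaml"': True}
```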
- nitpick.blender.is_scalar(value: Union[Dict[str, Any], List[Any], str, float]) bool [source]
Return True if the value is NOT a dict or a list.
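The check amounts to a single isinstance test (a sketch with a hypothetical name, assuming only dict/list count as containers here):

```python
def is_scalar_sketch(value) -> bool:
    """True when the value is neither a dict nor a list."""
    return not isinstance(value, (dict, list))

# is_scalar_sketch("text") and is_scalar_sketch(3.14) are True;
# is_scalar_sketch({"a": 1}) and is_scalar_sketch([1, 2]) are False.
```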
- nitpick.blender.quote_reducer(separator: str) Callable [source]
Reducer used to unflatten dicts, quoting keys when they contain dots.
- nitpick.blender.quoted_split(string_: str, separator='.') list[str] [source]
Split a string by a separator, but considering quoted parts (single or double quotes).
>>> quoted_split("my.key.without.quotes")
['my', 'key', 'without', 'quotes']
>>> quoted_split('"double.quoted.string"')
['double.quoted.string']
>>> quoted_split('"double.quoted.string".and.after')
['double.quoted.string', 'and', 'after']
>>> quoted_split('something.before."double.quoted.string"')
['something', 'before', 'double.quoted.string']
>>> quoted_split("'single.quoted.string'")
['single.quoted.string']
>>> quoted_split("'single.quoted.string'.and.after")
['single.quoted.string', 'and', 'after']
>>> quoted_split("something.before.'single.quoted.string'")
['something', 'before', 'single.quoted.string']
- nitpick.blender.quotes_splitter(flat_key: str) tuple[str, ...] [source]
Split keys keeping quoted strings together.
- nitpick.blender.replace_or_add_list_element(yaml_obj: Union[CommentedSeq, CommentedMap], element: Any, key: str, index: int) None [source]
Replace or add a new element in a YAML sequence of mappings.
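The replace-or-append behavior can be shown with a plain list (a sketch with a hypothetical name; the real function operates on ruamel.yaml CommentedSeq/CommentedMap objects and matches elements by key):

```python
def replace_or_add(seq: list, element, index: int) -> None:
    """Replace the item at `index`, or append when the index is past the end."""
    if index < len(seq):
        seq[index] = element
    else:
        seq.append(element)

hooks = [{"id": "black"}, {"id": "isort"}]
replace_or_add(hooks, {"id": "flake8"}, 1)  # replaces the element at index 1
replace_or_add(hooks, {"id": "mypy"}, 5)    # index past the end: appends
# hooks == [{"id": "black"}, {"id": "flake8"}, {"id": "mypy"}]
```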
- nitpick.blender.search_json(json_data: ElementData, jmespath_expression: ParsedResult | str, default: Any = None) Any [source]
Search a dictionary or list using a JMESPath expression. Return a default value if not found.
>>> data = {"root": {"app": [1, 2], "test": "something"}}
>>> search_json(data, "root.app", None)
[1, 2]
>>> search_json(data, "root.test", None)
'something'
>>> search_json(data, "root.unknown", "")
''
>>> search_json(data, "root.unknown", None)
>>> search_json(data, "root.unknown")
>>> search_json(data, jmespath.compile("root.app"), [])
[1, 2]
>>> search_json(data, jmespath.compile("root.whatever"), "xxx")
'xxx'
>>> search_json(data, "")
>>> search_json(data, None)
- Parameters
jmespath_expression – A compiled JMESPath expression or a string with an expression.
json_data – The dictionary to be searched.
default – Default value in case nothing is found.
- Returns
The object that was found or the default value.
- nitpick.blender.set_key_if_not_empty(dict_: Dict[str, Any], key: str, value: Any) None [source]
Set the key on the dict only when the value is not empty.
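The guard can be sketched in a few lines (a sketch with a hypothetical name, assuming "not empty" means truthy):

```python
def set_key_if_not_empty_sketch(dict_: dict, key: str, value) -> None:
    """Set the key only when the value is truthy (non-empty)."""
    if value:
        dict_[key] = value

config = {}
set_key_if_not_empty_sketch(config, "plugins", ["pylint"])
set_key_if_not_empty_sketch(config, "ignored", [])  # empty: key not set
# config == {"plugins": ["pylint"]}
```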