Featurizers¶
DeepChem contains an extensive collection of featurizers. If you haven’t run into this terminology before, a “featurizer” is a chunk of code which transforms raw input data into a processed form suitable for machine learning. Machine learning methods often need data to be pre-chewed before they can process it. Think of this like a mama penguin chewing up food so the baby penguin can digest it easily.
Now if you’ve watched a few introductory deep learning lectures, you might ask, why do we need something like a featurizer? Isn’t part of the promise of deep learning that we can learn patterns directly from raw data?
Unfortunately it turns out that deep learning techniques need featurizers just like normal machine learning methods do. Arguably, they are less dependent on sophisticated featurizers and more capable of learning sophisticated patterns from simpler data. But nevertheless, deep learning systems can’t simply chew up raw files. For this reason, DeepChem provides an extensive collection of featurization methods which we will review on this page.
Molecule Featurizers¶
These featurizers work with datasets of molecules.
Graph Convolution Featurizers¶
We are simplifying our graph convolution models around a joint data representation (GraphData) in a future version of DeepChem, so in the meantime we provide several featurizers.

ConvMolFeaturizer and WeaveFeaturizer are used with graph convolution models that inherit from KerasModel. ConvMolFeaturizer is used with all of these graph convolution models except WeaveModel; WeaveFeaturizer is used only with WeaveModel. On the other hand, MolGraphConvFeaturizer is used with graph convolution models that inherit from TorchModel. MolGanFeaturizer is used with the MolGAN model, a GAN model for the generation of small molecules.
ConvMolFeaturizer¶
- class ConvMolFeaturizer(master_atom: bool = False, use_chirality: bool = False, atom_properties: Iterable[str] = [], per_atom_fragmentation: bool = False)[source]¶
This class implements the featurization needed for Duvenaud graph convolutions.
Duvenaud graph convolutions [1] construct a vector of descriptors for each atom in a molecule. The featurizer computes that vector of local descriptors.
Examples
>>> import deepchem as dc
>>> smiles = ["C", "CCC"]
>>> featurizer = dc.feat.ConvMolFeaturizer(per_atom_fragmentation=False)
>>> f = featurizer.featurize(smiles)
>>> # Using ConvMolFeaturizer to create featurized fragments derived from molecules of interest.
>>> # This is used only in the context of performing interpretation of models using atomic
>>> # contributions (atom-based model interpretation)
>>> smiles = ["C", "CCC"]
>>> featurizer = dc.feat.ConvMolFeaturizer(per_atom_fragmentation=True)
>>> f = featurizer.featurize(smiles)
>>> len(f)  # contains 2 lists with featurized fragments from 2 mols
2
References
- 1
Duvenaud, David K., et al. “Convolutional networks on graphs for learning molecular fingerprints.” Advances in neural information processing systems. 2015.
Note
This class requires RDKit to be installed.
- __init__(master_atom: bool = False, use_chirality: bool = False, atom_properties: Iterable[str] = [], per_atom_fragmentation: bool = False)[source]¶
- Parameters
master_atom (Boolean) – if true, create a fake atom with bonds to every other atom. The initialization is the mean of the other atom features in the molecule. This technique is briefly discussed in Neural Message Passing for Quantum Chemistry, https://arxiv.org/pdf/1704.01212.pdf
use_chirality (Boolean) – if true then make the resulting atom features aware of the chirality of the molecules in question
atom_properties (list of string or None) – properties in the RDKit Mol object to use as additional atom-level features in the larger molecular feature. If None, then no atom-level properties are used. Properties in the RDKit Mol object should be in the form atom XXXXXXXX NAME, where XXXXXXXX is a zero-padded 8-digit number corresponding to the zero-indexed atom index of each atom, and NAME is the name of the property provided in atom_properties. So “atom 00000000 sasa” would be the name of the molecule-level property in mol where the solvent accessible surface area of atom 0 would be stored.
per_atom_fragmentation (Boolean) –
If True, then multiple “atom-depleted” versions of each molecule will be created (using featurize() method). For each molecule, atoms are removed one at a time and the resulting molecule is featurized. The result is a list of ConvMol objects, one with each heavy atom removed. This is useful for subsequent model interpretation: finding atoms favorable/unfavorable for (modelled) activity. This option is typically used in combination with a FlatteningTransformer to split the lists into separate samples.
Since ConvMol is an object and not a numpy array, the dtype needs to be set to object.
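As a sketch of the fragment workflow described above (assuming the dc.trans.FlatteningTransformer API of recent DeepChem versions), the per-molecule fragment lists can be split into separate per-fragment samples like this:

>>> import deepchem as dc
>>> featurizer = dc.feat.ConvMolFeaturizer(per_atom_fragmentation=True)
>>> frags = featurizer.featurize(["CCC"])  # one list of atom-depleted ConvMol objects per molecule
>>> dataset = dc.data.NumpyDataset(frags)
>>> # split each per-molecule fragment list into separate samples
>>> dataset = dc.trans.FlatteningTransformer(dataset).transform(dataset)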
- featurize(datapoints: Union[Any, str, Iterable[Any], Iterable[str]], log_every_n: int = 1000, **kwargs) numpy.ndarray [source]¶
Overrides the parent method; the aim is to add handling for the featurization of atom-depleted molecules.
- Parameters
datapoints (rdkit.Chem.rdchem.Mol / SMILES string / iterable) – RDKit Mol, or SMILES string or iterable sequence of RDKit mols/SMILES strings.
log_every_n (int, default 1000) – Logging messages reported every log_every_n samples.
- Returns
features – A numpy array containing a featurized representation of datapoints.
- Return type
np.ndarray
WeaveFeaturizer¶
- class WeaveFeaturizer(graph_distance: bool = True, explicit_H: bool = False, use_chirality: bool = False, max_pair_distance: Optional[int] = None)[source]¶
This class implements the featurization needed for Weave convolutions.
Weave convolutions were introduced in [1]. Unlike Duvenaud graph convolutions, weave convolutions require a quadratic matrix of interaction descriptors for each pair of atoms. These extra descriptors may provide additional descriptive power, but at the cost of a larger featurized dataset.
Examples
>>> import deepchem as dc
>>> mols = ["CCC"]
>>> featurizer = dc.feat.WeaveFeaturizer()
>>> features = featurizer.featurize(mols)
>>> type(features[0])
<class 'deepchem.feat.mol_graphs.WeaveMol'>
>>> features[0].get_num_atoms()  # 3 atoms in compound
3
>>> features[0].get_num_features()  # feature size
75
>>> type(features[0].get_atom_features())
<class 'numpy.ndarray'>
>>> features[0].get_atom_features().shape
(3, 75)
>>> type(features[0].get_pair_features())
<class 'numpy.ndarray'>
>>> features[0].get_pair_features().shape
(9, 14)
References
- 1
Kearnes, Steven, et al. “Molecular graph convolutions: moving beyond fingerprints.” Journal of computer-aided molecular design 30.8 (2016): 595-608.
Note
This class requires RDKit to be installed.
- featurize(datapoints, log_every_n=1000, **kwargs) numpy.ndarray [source]¶
Calculate features for molecules.
- Parameters
datapoints (rdkit.Chem.rdchem.Mol / SMILES string / iterable) – RDKit Mol, or SMILES string or iterable sequence of RDKit mols/SMILES strings.
log_every_n (int, default 1000) – Logging messages reported every log_every_n samples.
- Returns
features – A numpy array containing a featurized representation of datapoints.
- Return type
np.ndarray
- __init__(graph_distance: bool = True, explicit_H: bool = False, use_chirality: bool = False, max_pair_distance: Optional[int] = None)[source]¶
Initialize this featurizer with set parameters.
- Parameters
graph_distance (bool, (default True)) – If True, use graph distance for distance features. Otherwise, use Euclidean distance; note that in that case, molecules that this featurizer is invoked on must have valid conformer information.
explicit_H (bool, (default False)) – If true, model hydrogens in the molecule.
use_chirality (bool, (default False)) – If true, use chiral information in the featurization
max_pair_distance (Optional[int], (default None)) – This value can be a positive integer or None. This parameter determines the maximum graph distance at which pair features are computed. For example, if max_pair_distance==2, then pair features are computed only for atoms at most graph distance 2 apart. If max_pair_distance is None, all pairs are considered (effectively infinite max_pair_distance)
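As a brief sketch of the effect of max_pair_distance (no output shapes are asserted, since the number of retained pairs depends on the molecule):

>>> import deepchem as dc
>>> featurizer = dc.feat.WeaveFeaturizer(max_pair_distance=2)
>>> feats = featurizer.featurize(["CCCC"])
>>> # pair features are kept only for atom pairs within graph distance 2,
>>> # instead of all num_atoms**2 pairs retained with the default max_pair_distance=None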
MolGanFeaturizer¶
- class MolGanFeaturizer(max_atom_count: int = 9, kekulize: bool = True, bond_labels: Optional[List[Any]] = None, atom_labels: Optional[List[int]] = None)[source]¶
Featurizer for MolGAN de-novo molecular generation [1]. The default representation is in the form of a GraphMatrix object, a wrapper for two matrices containing atom and bond type information. The class also provides reverse (defeaturization) capabilities.
Examples
>>> import deepchem as dc
>>> from rdkit import Chem
>>> rdkit_mol, smiles_mol = Chem.MolFromSmiles('CCC'), 'C1=CC=CC=C1'
>>> molecules = [rdkit_mol, smiles_mol]
>>> featurizer = dc.feat.MolGanFeaturizer()
>>> features = featurizer.featurize(molecules)
>>> len(features)  # 2 molecules
2
>>> type(features[0])
<class 'deepchem.feat.molecule_featurizers.molgan_featurizer.GraphMatrix'>
>>> molecules = featurizer.defeaturize(features)  # defeaturization
>>> type(molecules[0])
<class 'rdkit.Chem.rdchem.Mol'>
- __init__(max_atom_count: int = 9, kekulize: bool = True, bond_labels: Optional[List[Any]] = None, atom_labels: Optional[List[int]] = None)[source]¶
- Parameters
max_atom_count (int, default 9) – Maximum number of atoms used for creation of the adjacency matrix. Molecules cannot have more atoms than this number; implicit hydrogens do not count.
kekulize (bool, default True) – Whether molecules should be kekulized. When used, this solves a number of issues with defeaturization.
bond_labels (List[RDKitBond]) – List of types of bond used for generation of adjacency matrix
atom_labels (List[int]) – List of atomic numbers used for generation of node features
References
- 1
Nicola De Cao et al. “MolGAN: An implicit generative model for small molecular graphs” (2018), https://arxiv.org/abs/1805.11973
- featurize(datapoints, log_every_n=1000, **kwargs) numpy.ndarray [source]¶
Calculate features for molecules.
- Parameters
datapoints (rdkit.Chem.rdchem.Mol / SMILES string / iterable) – RDKit Mol, or SMILES string or iterable sequence of RDKit mols/SMILES strings.
log_every_n (int, default 1000) – Logging messages reported every log_every_n samples.
- Returns
features – A numpy array containing a featurized representation of datapoints.
- Return type
np.ndarray
- defeaturize(graphs: Union[deepchem.feat.molecule_featurizers.molgan_featurizer.GraphMatrix, Sequence[deepchem.feat.molecule_featurizers.molgan_featurizer.GraphMatrix]], log_every_n: int = 1000) numpy.ndarray [source]¶
Calculates molecules from corresponding GraphMatrix objects.
- Parameters
graphs (GraphMatrix / iterable) – GraphMatrix object or corresponding iterable
log_every_n (int, default 1000) – Logging messages reported every log_every_n samples.
- Returns
features – A numpy array containing RDKit Mol objects.
- Return type
np.ndarray
MolGraphConvFeaturizer¶
- class MolGraphConvFeaturizer(use_edges: bool = False, use_chirality: bool = False, use_partial_charge: bool = False)[source]¶
This class is a featurizer of general graph convolution networks for molecules.
The default node (atom) and edge (bond) representations are based on the WeaveNet paper. If you want to use your own representations, you could use this class as a guide to define your own featurizer. In many cases, it’s enough to modify the return values of construct_atom_feature or construct_bond_feature.
The default node representation is constructed by concatenating the following values, and the feature length is 30.
Atom type: A one-hot vector of this atom, “C”, “N”, “O”, “F”, “P”, “S”, “Cl”, “Br”, “I”, “other atoms”.
Formal charge: Integer electronic charge.
Hybridization: A one-hot vector of “sp”, “sp2”, “sp3”.
Hydrogen bonding: A one-hot vector of whether this atom is a hydrogen bond donor or acceptor.
Aromatic: A one-hot vector of whether the atom belongs to an aromatic ring.
Degree: A one-hot vector of the degree (0-5) of this atom.
Number of Hydrogens: A one-hot vector of the number of hydrogens (0-4) connected to this atom.
Chirality: A one-hot vector of the chirality, “R” or “S”. (Optional)
Partial charge: Calculated partial charge. (Optional)
The default edge representation is constructed by concatenating the following values, and the feature length is 11.
Bond type: A one-hot vector of the bond type, “single”, “double”, “triple”, or “aromatic”.
Same ring: A one-hot vector of whether the atoms in the pair are in the same ring.
Conjugated: A one-hot vector of whether this bond is conjugated or not.
Stereo: A one-hot vector of the stereo configuration of a bond.
If you want to know more details about the features, please check the paper [1] and the utilities in deepchem.utils.molecule_feature_utils.py.
Examples
>>> smiles = ["C1CCC1", "C1=CC=CN=C1"] >>> featurizer = MolGraphConvFeaturizer(use_edges=True) >>> out = featurizer.featurize(smiles) >>> type(out[0]) <class 'deepchem.feat.graph_data.GraphData'> >>> out[0].num_node_features 30 >>> out[0].num_edge_features 11
References
- 1
Kearnes, Steven, et al. “Molecular graph convolutions: moving beyond fingerprints.” Journal of computer-aided molecular design 30.8 (2016): 595-608.
Note
This class requires RDKit to be installed.
- __init__(use_edges: bool = False, use_chirality: bool = False, use_partial_charge: bool = False)[source]¶
- Parameters
use_edges (bool, default False) – Whether to use edge features or not.
use_chirality (bool, default False) – Whether to use chirality information or not. If True, featurization becomes slow.
use_partial_charge (bool, default False) – Whether to use partial charge data or not. If True, this featurizer computes Gasteiger charges, so featurization may fail for some molecules and becomes slow.
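A small sketch of enabling the optional node features described above (exact feature lengths are not asserted; they extend the default 30):

>>> import deepchem as dc
>>> featurizer = dc.feat.MolGraphConvFeaturizer(use_chirality=True, use_partial_charge=True)
>>> out = featurizer.featurize(["CCO"])
>>> # each node now additionally carries the chirality one-hot and the computed
>>> # Gasteiger partial charge, so out[0].num_node_features exceeds the default 30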
- featurize(datapoints, log_every_n=1000, **kwargs) numpy.ndarray [source]¶
Calculate features for molecules.
- Parameters
datapoints (rdkit.Chem.rdchem.Mol / SMILES string / iterable) – RDKit Mol, or SMILES string or iterable sequence of RDKit mols/SMILES strings.
log_every_n (int, default 1000) – Logging messages reported every log_every_n samples.
- Returns
features – A numpy array containing a featurized representation of datapoints.
- Return type
np.ndarray
PagtnMolGraphFeaturizer¶
- class PagtnMolGraphFeaturizer(max_length=5)[source]¶
This class is a featurizer for PAGTN graph networks for molecules.
The featurization is based on the PAGTN model. It is slightly more computationally intensive than the default graph convolution featurizer, but it builds a molecular graph connecting all atom pairs, accounting for interactions of an atom with every other atom in the molecule. According to the paper, interactions between a pair of atoms depend on the relative distance between them, and hence the function needs to calculate the shortest path between them.
The default node representation is constructed by concatenating the following values, and the feature length is 94.
Atom type: One hot encoding of the atom type, covering the elements most likely to occur in a chemical compound.
Formal charge: One hot encoding of formal charge of the atom.
Degree: One hot encoding of the atom degree.
Explicit Valence: One hot encoding of the explicit valence of an atom. The supported possibilities include 0-6.
Implicit Valence: One hot encoding of the implicit valence of an atom. The supported possibilities include 0-5.
Aromaticity: Boolean representing if an atom is aromatic.
The default edge representation is constructed by concatenating the following values, and the feature length is 42. It builds a complete graph where each node is connected to every other node. The edge representations are calculated based on the shortest path between two nodes (choose any one if multiple exist). Each bond encountered in the shortest path is used to calculate edge features.
Bond type: A one-hot vector of the bond type, “single”, “double”, “triple”, or “aromatic”.
Conjugated: A one-hot vector of whether this bond is conjugated or not.
Same ring: A one-hot vector of whether the atoms in the pair are in the same ring.
Ring Size and Aromaticity: One hot encoding of atoms in pair based on ring size and aromaticity.
Distance: One hot encoding of the distance between pair of atoms.
Examples
>>> from deepchem.feat import PagtnMolGraphFeaturizer
>>> smiles = ["C1CCC1", "C1=CC=CN=C1"]
>>> featurizer = PagtnMolGraphFeaturizer(max_length=5)
>>> out = featurizer.featurize(smiles)
>>> type(out[0])
<class 'deepchem.feat.graph_data.GraphData'>
>>> out[0].num_node_features
94
>>> out[0].num_edge_features
42
References
- 1
Chen, Barzilay, Jaakkola “Path-Augmented Graph Transformer Network” 10.26434/chemrxiv.8214422.
Note
This class requires RDKit to be installed.
- __init__(max_length=5)[source]¶
- Parameters
max_length (int) – Maximum distance up to which shortest paths must be considered. Paths shorter than max_length will be padded and longer ones truncated; defaults to 5.
- featurize(datapoints, log_every_n=1000, **kwargs) numpy.ndarray [source]¶
Calculate features for molecules.
- Parameters
datapoints (rdkit.Chem.rdchem.Mol / SMILES string / iterable) – RDKit Mol, or SMILES string or iterable sequence of RDKit mols/SMILES strings.
log_every_n (int, default 1000) – Logging messages reported every log_every_n samples.
- Returns
features – A numpy array containing a featurized representation of datapoints.
- Return type
np.ndarray
DMPNNFeaturizer¶
- class DMPNNFeaturizer(features_generators: Optional[List[str]] = None, is_adding_hs: bool = False, use_original_atom_ranks: bool = False)[source]¶
This class is a featurizer for the Directed Message Passing Neural Network (D-MPNN) implementation.
The default node (atom) and edge (bond) representations are based on the paper Analyzing Learned Molecular Representations for Property Prediction.
The default node representation is constructed by concatenating the following values, and the feature length is 133.
Atomic num: A one-hot vector of the atomic number, covering the first 100 atoms.
Degree: A one-hot vector of the degree (0-5) of this atom.
Formal charge: Integer electronic charge, -1, -2, 1, 2, 0.
Chirality: A one-hot vector of the chirality tag (0-3) of this atom.
Number of Hydrogens: A one-hot vector of the number of hydrogens (0-4) connected to this atom.
Hybridization: A one-hot vector of “SP”, “SP2”, “SP3”, “SP3D”, “SP3D2”.
Aromatic: A one-hot vector of whether the atom belongs to an aromatic ring.
Mass: Atomic mass * 0.01
The default edge representation is constructed by concatenating the following values, and the feature length is 14.
Bond type: A one-hot vector of the bond type, “single”, “double”, “triple”, or “aromatic”.
Same ring: A one-hot vector of whether the atoms in the pair are in the same ring.
Conjugated: A one-hot vector of whether this bond is conjugated or not.
Stereo: A one-hot vector of the stereo configuration (0-5) of a bond.
If you want to know more details about the features, please check the paper [1] and the utilities in deepchem.utils.molecule_feature_utils.py.
Examples
>>> smiles = ["C1=CC=CN=C1", "C1CCC1"] >>> featurizer = DMPNNFeaturizer() >>> out = featurizer.featurize(smiles) >>> type(out[0]) <class 'deepchem.feat.graph_data.GraphData'> >>> out[0].num_nodes 6 >>> out[0].num_node_features 133 >>> out[0].node_features.shape (6, 133) >>> out[0].num_edge_features 14 >>> out[0].num_edges 12 >>> out[0].edge_features.shape (12, 14)
References
- 1
Kearnes, Steven, et al. “Molecular graph convolutions: moving beyond fingerprints.” Journal of computer-aided molecular design 30.8 (2016): 595-608.
Note
This class requires RDKit to be installed.
- __init__(features_generators: Optional[List[str]] = None, is_adding_hs: bool = False, use_original_atom_ranks: bool = False)[source]¶
- Parameters
features_generators (List[str], default None) – List of global feature generators to be used.
is_adding_hs (bool, default False) – Whether to add Hs or not.
use_original_atom_ranks (bool, default False) – Whether to use original atom mapping or canonical atom mapping
- featurize(datapoints, log_every_n=1000, **kwargs) numpy.ndarray [source]¶
Calculate features for molecules.
- Parameters
datapoints (rdkit.Chem.rdchem.Mol / SMILES string / iterable) – RDKit Mol, or SMILES string or iterable sequence of RDKit mols/SMILES strings.
log_every_n (int, default 1000) – Logging messages reported every log_every_n samples.
- Returns
features – A numpy array containing a featurized representation of datapoints.
- Return type
np.ndarray
GroverFeaturizer¶
- class GroverFeaturizer(features_generator: Optional[deepchem.feat.base_classes.MolecularFeaturizer] = None, bond_drop_rate: float = 0.0)[source]¶
Featurizer for the GROVER model.
The GROVER featurizer is used to compute features suitable for the GROVER model. It accepts an RDKit molecule of type rdkit.Chem.rdchem.Mol or a SMILES string as input and computes the following sets of features:
a molecular graph from the input molecule
functional groups which are used only during pretraining
additional features which can only be used during finetuning
- Parameters
additional_featurizer (dc.feat.Featurizer) – Given a molecular dataset, it is possible to extract additional molecular features in order to train and finetune from the existing pretrained model. The additional_featurizer can be used to generate additional features for the molecule.
References
- 1
Rong, Yu, et al. “Self-supervised graph transformer on large-scale molecular data.” NeurIPS, 2020
Examples
>>> import deepchem as dc
>>> from deepchem.feat import GroverFeaturizer
>>> feat = GroverFeaturizer(features_generator=dc.feat.CircularFingerprint())
>>> out = feat.featurize('CCC')
Note
This class requires RDKit to be installed.
- __init__(features_generator: Optional[deepchem.feat.base_classes.MolecularFeaturizer] = None, bond_drop_rate: float = 0.0)[source]¶
- Parameters
features_generator (MolecularFeaturizer, optional (default None)) – Featurizer used to generate additional molecular features (see additional_featurizer above).
bond_drop_rate (float, default 0.0) – Rate at which bonds are dropped during featurization.
RDKitConformerFeaturizer¶
- class RDKitConformerFeaturizer(num_conformers: int = 1, rmsd_cutoff: float = 2)[source]¶
A featurizer that featurizes an RDKit mol object as a GraphData object with 3D coordinates. The 3D coordinates are represented in the node_pos_features attribute of the GraphData object of shape [num_atoms * num_conformers, 3].
The ETKDGv2 algorithm is used to generate 3D coordinates for the molecule. The RDKit source for this algorithm can be found in RDkit/Code/GraphMol/DistGeomHelpers/Embedder.cpp The documentation can be found here: https://rdkit.org/docs/source/rdkit.Chem.rdDistGeom.html#rdkit.Chem.rdDistGeom.ETKDGv2
This featurization requires RDKit.
Examples
>>> from deepchem.feat.molecule_featurizers.conformer_featurizer import RDKitConformerFeaturizer
>>> from deepchem.feat.graph_data import BatchGraphData
>>> import numpy as np
>>> featurizer = RDKitConformerFeaturizer(num_conformers=2)
>>> molecule = "CCO"
>>> features_list = featurizer.featurize([molecule])
>>> batched_feats = BatchGraphData(np.concatenate(features_list).ravel())
>>> print(batched_feats.node_pos_features.shape)
(18, 3)
- __init__(num_conformers: int = 1, rmsd_cutoff: float = 2)[source]¶
Initialize the RDKitConformerFeaturizer with the given parameters.
- Parameters
num_conformers (int, optional, default=1) – The number of conformers to generate for each molecule.
rmsd_cutoff (float, optional, default=2) – The root-mean-square deviation (RMSD) cutoff value. Conformers with an RMSD greater than this value will be discarded.
Utilities¶
Here are some constants that are used by the graph convolutional featurizers for molecules.
- class GraphConvConstants[source]¶
This class defines a collection of constants which are useful for graph convolutions on molecules.
- possible_atom_list = ['C', 'N', 'O', 'S', 'F', 'P', 'Cl', 'Mg', 'Na', 'Br', 'Fe', 'Ca', 'Cu', 'Mc', 'Pd', 'Pb', 'K', 'I', 'Al', 'Ni', 'Mn'][source]¶
Allowed atom types.
- possible_formal_charge_list = [-3, -2, -1, 0, 1, 2, 3][source]¶
Allowed formal charges.
- possible_hybridization_list = ['SP', 'SP2', 'SP3', 'SP3D', 'SP3D2'][source]¶
Allowed hybridization types, corresponding to values of the RDKit HybridizationType.
- reference_lists = [['C', 'N', 'O', 'S', 'F', 'P', 'Cl', 'Mg', 'Na', 'Br', 'Fe', 'Ca', 'Cu', 'Mc', 'Pd', 'Pb', 'K', 'I', 'Al', 'Ni', 'Mn'], [0, 1, 2, 3, 4], [0, 1, 2, 3, 4, 5, 6], [-3, -2, -1, 0, 1, 2, 3], [0, 1, 2], ['SP', 'SP2', 'SP3', 'SP3D', 'SP3D2'], ['R', 'S']][source]¶
The reference lists of allowed values for each atom feature (atom type, number of hydrogens, valence, formal charge, number of radical electrons, hybridization, and chirality). See get_intervals().
- intervals = [1, 6, 48, 384, 1536, 9216, 27648][source]¶
Cumulative products of the reference list lengths, as returned by get_intervals(); used to map a combination of atom features to a unique id.
There are a number of helper methods used by the graph convolutional classes which we document here.
- one_of_k_encoding(x, allowable_set)[source]¶
One-hot encodes an element with respect to a provided allowable set.
- Parameters
x (object) – Must be present in allowable_set.
allowable_set (list) – List of allowable quantities.
Example
>>> import deepchem as dc
>>> dc.feat.graph_features.one_of_k_encoding("a", ["a", "b", "c"])
[True, False, False]
- Raises
ValueError – If x is not present in allowable_set.
- one_of_k_encoding_unk(x, allowable_set)[source]¶
Maps inputs not in the allowable set to the last element.
Unlike one_of_k_encoding, if x is not in allowable_set, this method pretends that x is the last element of allowable_set.
- Parameters
x (object) – The element to encode; need not be present in allowable_set.
allowable_set (list) – List of allowable quantities.
Examples
>>> dc.feat.graph_features.one_of_k_encoding_unk("s", ["a", "b", "c"])
[False, False, True]
- get_intervals(l)[source]¶
For a list of lists, gets the cumulative products of the lengths.
Note that we add 1 to the lengths of all lists (to avoid an empty list propagating a 0).
- Parameters
l (list of lists) – Returns the cumulative product of these lengths.
Examples
>>> dc.feat.graph_features.get_intervals([[1], [1, 2], [1, 2, 3]])
[1, 3, 12]
>>> dc.feat.graph_features.get_intervals([[1], [], [1, 2], [1, 2, 3]])
[1, 1, 3, 12]
- safe_index(l, e)[source]¶
Gets the index of e in l, providing an index of len(l) if not found
- Parameters
l (list) – List of values
e (object) – Object to look up in l.
Examples
>>> dc.feat.graph_features.safe_index([1, 2, 3], 1)
0
>>> dc.feat.graph_features.safe_index([1, 2, 3], 7)
3
- get_feature_list(atom)[source]¶
Returns a list of possible features for this atom.
- Parameters
atom (RDKit.Chem.rdchem.Atom) – Atom to get features for
Examples
>>> from rdkit import Chem >>> mol = Chem.MolFromSmiles("C") >>> atom = mol.GetAtoms()[0] >>> features = dc.feat.graph_features.get_feature_list(atom) >>> type(features) <class 'list'> >>> len(features) 6
Note
This method requires RDKit to be installed.
- Returns
features – List of length 6. The i-th value in this list provides the index of the atom in the corresponding feature value list. The 6 feature value lists for this function are [GraphConvConstants.possible_atom_list, GraphConvConstants.possible_numH_list, GraphConvConstants.possible_valence_list, GraphConvConstants.possible_formal_charge_list, GraphConvConstants.possible_num_radical_e_list, GraphConvConstants.possible_hybridization_list].
- Return type
list
- features_to_id(features, intervals)[source]¶
Convert a list of features into an index using the spacings provided in intervals.
- Parameters
features (list) – List of features as returned by get_feature_list()
intervals (list) – List of intervals as returned by get_intervals()
- Returns
id – The index in a feature vector given by the given set of features.
- Return type
int
- id_to_features(id, intervals)[source]¶
Given an index in a feature vector, return the original set of features.
- Parameters
id (int) – The index in a feature vector given by the given set of features.
intervals (list) – List of intervals as returned by get_intervals()
- Returns
features – List of features as returned by get_feature_list()
- Return type
list
- atom_to_id(atom)[source]¶
Return a unique id corresponding to the atom type
- Parameters
atom (RDKit.Chem.rdchem.Atom) – Atom to convert to ids.
- Returns
id – The index in a feature vector given by the given set of features.
- Return type
int
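The helpers above compose into a round trip from an atom to a unique id and back. A minimal sketch (assuming GraphConvConstants.intervals as the spacing vector; no outputs are asserted, since ids depend on the reference lists):

>>> import deepchem as dc
>>> from rdkit import Chem
>>> atom = Chem.MolFromSmiles("CCO").GetAtoms()[0]
>>> features = dc.feat.graph_features.get_feature_list(atom)
>>> intervals = dc.feat.graph_features.GraphConvConstants.intervals
>>> atom_id = dc.feat.graph_features.features_to_id(features, intervals)
>>> recovered = dc.feat.graph_features.id_to_features(atom_id, intervals)
>>> # recovered should match features; atom_to_id(atom) computes the same id in one call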
This function helps compute distances between atoms from a given base atom.
- find_distance(a1: Any, num_atoms: int, bond_adj_list, max_distance=7) numpy.ndarray [source]¶
Computes distances from provided atom.
- Parameters
a1 (RDKit atom) – The source atom to compute distances from.
num_atoms (int) – The total number of atoms.
bond_adj_list (list of lists) – bond_adj_list[i] is a list of the atom indices that atom i shares a bond with. This list is symmetrical so if j in bond_adj_list[i] then i in bond_adj_list[j].
max_distance (int, optional (default 7)) – The max distance to search.
- Returns
distances – Of shape (num_atoms, max_distance). Provides a one-hot encoding of the distances. That is, distances[i] is a one-hot encoding of the distance from a1 to atom i.
- Return type
np.ndarray
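A sketch of calling find_distance with a hand-built adjacency list for the three-atom chain C-C-O (the bond_adj_list here is written out manually for illustration; the output shape follows the (num_atoms, max_distance) contract above):

>>> import deepchem as dc
>>> from rdkit import Chem
>>> mol = Chem.MolFromSmiles("CCO")
>>> a1 = mol.GetAtoms()[0]
>>> bond_adj_list = [[1], [0, 2], [1]]  # atom 0 bonds to 1; atom 1 bonds to 0 and 2
>>> distances = dc.feat.graph_features.find_distance(a1, num_atoms=3, bond_adj_list=bond_adj_list)
>>> distances.shape  # (num_atoms, max_distance)
(3, 7)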
This function is important and computes per-atom feature vectors used by graph convolutional featurizers.
- atom_features(atom, bool_id_feat=False, explicit_H=False, use_chirality=False)[source]¶
Helper method used to compute per-atom feature vectors.
Many different featurization methods compute per-atom features such as ConvMolFeaturizer, WeaveFeaturizer. This method computes such features.
- Parameters
atom (RDKit.Chem.rdchem.Atom) – Atom to compute features on.
bool_id_feat (bool, optional) – Return an array of unique identifiers corresponding to atom type.
explicit_H (bool, optional) – If true, model hydrogens explicitly
use_chirality (bool, optional) – If true, use chirality information.
- Returns
features – An array of per-atom features.
- Return type
np.ndarray
Examples
>>> from rdkit import Chem
>>> mol = Chem.MolFromSmiles('CCC')
>>> atom = mol.GetAtoms()[0]
>>> features = dc.feat.graph_features.atom_features(atom)
>>> type(features)
<class 'numpy.ndarray'>
>>> features.shape
(75,)
This function computes the bond features used by graph convolutional featurizers.
- bond_features(bond, use_chirality=False, use_extended_chirality=False)[source]¶
Helper method used to compute bond feature vectors.
Many different featurization methods compute bond features such as WeaveFeaturizer. This method computes such features.
- Parameters
bond (rdkit.Chem.rdchem.Bond) – Bond to compute features on.
use_chirality (bool, optional) – If true, use chirality information.
use_extended_chirality (bool, optional) – If true, use chirality information with up to 6 different types.
Note
This method requires RDKit to be installed.
- Returns
bond_feats (np.ndarray) – Array of bond features. This is a 1-D array of length 6 if use_chirality is False, else of length 10 with chirality encoded.
bond_feats (Sequence[Union[bool, int, float]]) – List of bond features returned if use_extended_chirality is True.
Examples
>>> from rdkit import Chem
>>> mol = Chem.MolFromSmiles('CCC')
>>> bond = mol.GetBonds()[0]
>>> bond_features = dc.feat.graph_features.bond_features(bond)
>>> type(bond_features)
<class 'numpy.ndarray'>
>>> bond_features.shape
(6,)
This function computes atom-atom features (for atom pairs which may not have bonds between them.)
- pair_features(mol: Any, bond_features_map: dict, bond_adj_list: List, bt_len: int = 6, graph_distance: bool = True, max_pair_distance: Optional[int] = None) Tuple[numpy.ndarray, numpy.ndarray] [source]¶
Helper method used to compute atom pair feature vectors.
Many different featurization methods compute atom pair features such as WeaveFeaturizer. Note that atom pair features could be for pairs of atoms which aren’t necessarily bonded to one another.
- Parameters
mol (RDKit Mol) – Molecule to compute features on.
bond_features_map (dict) – Dictionary that maps pairs of atom ids (say (2, 3) for a bond between atoms 2 and 3) to the features for the bond between them.
bond_adj_list (list of lists) – bond_adj_list[i] is a list of the atom indices that atom i shares a bond with. This list is symmetrical so if j in bond_adj_list[i] then i in bond_adj_list[j].
bt_len (int, optional (default 6)) – The number of different bond types to consider.
graph_distance (bool, optional (default True)) – If true, use graph distance between atoms. Else use Euclidean distance, in which case the specified mol must have a conformer; atomic positions will be retrieved by calling mol.GetConformer(0).
max_pair_distance (Optional[int], (default None)) – This value can be a positive integer or None. This parameter determines the maximum graph distance at which pair features are computed. For example, if max_pair_distance==2, then pair features are computed only for atoms at most graph distance 2 apart. If max_pair_distance is None, all pairs are considered (effectively infinite max_pair_distance)
Note
This method requires RDKit to be installed.
- Returns
features (np.ndarray) – Of shape (N_edges, bt_len + max_distance + 1). This is the array of pairwise features for all atom pairs, where N_edges is the number of edges within max_pair_distance of one another in this molecule.
pair_edges (np.ndarray) – Of shape (2, num_pairs) where num_pairs is the total number of pairs within max_pair_distance of one another.
MACCSKeysFingerprint¶
- class MACCSKeysFingerprint[source]¶
MACCS Keys Fingerprint.
The MACCS (Molecular ACCess System) keys are one of the most commonly used structural keys. Please confirm the details in [1], [2].
Examples
>>> import deepchem as dc
>>> smiles = 'CC(=O)OC1=CC=CC=C1C(=O)O'
>>> featurizer = dc.feat.MACCSKeysFingerprint()
>>> features = featurizer.featurize([smiles])
>>> type(features[0])
<class 'numpy.ndarray'>
>>> features[0].shape
(167,)
References
- 1
Durant, Joseph L., et al. “Reoptimization of MDL keys for use in drug discovery.” Journal of chemical information and computer sciences 42.6 (2002): 1273-1280.
- 2
https://github.com/rdkit/rdkit/blob/master/rdkit/Chem/MACCSkeys.py
Note
This class requires RDKit to be installed.
MATFeaturizer¶
- class MATFeaturizer[source]¶
This class is a featurizer for the Molecule Attention Transformer [1]. The returned value is a numpy array which consists of molecular graph descriptions:
Node Features
Adjacency Matrix
Distance Matrix
References
- 1
Lukasz Maziarka et al. “Molecule Attention Transformer.” https://arxiv.org/abs/2002.08264
Examples
>>> import deepchem as dc
>>> feat = dc.feat.MATFeaturizer()
>>> out = feat.featurize("CCC")
Note
This class requires RDKit to be installed.
- __init__()[source]¶
- Parameters
use_original_atoms_order (bool, default False) – Whether to use original atom ordering or canonical ordering (default)
- construct_mol(mol: Any) Any [source]¶
Processes an input RDKitMol further to be able to extract id-specific Conformers from it using mol.GetConformer().
- Parameters
mol (RDKitMol) – RDKit Mol object.
- Returns
mol – A processed RDKitMol object which is embedded, UFF-optimized, and has hydrogen atoms removed. If these conditions cannot be met and a ValueError is raised, then 2D coordinates are computed instead.
- Return type
RDKitMol
- atom_features(atom: Any) numpy.ndarray [source]¶
DeepChem already contains an atom_features function; however, we define a new one here due to the need to handle features specific to MAT. Since we need new features like GetNeighbors and IsInRing, and the number of features required for MAT is a fraction of what the DeepChem atom_features function computes, we can speed up computation by defining a custom function.
- Parameters
atom (RDKitAtom) – RDKit Atom object.
- Returns
Numpy array containing atom features.
- Return type
ndarray
- construct_node_features_matrix(mol: Any) numpy.ndarray [source]¶
This function constructs a matrix of atom features for all atoms in a given molecule using the atom_features function.
- Parameters
mol (RDKitMol) – RDKit Mol object.
- Returns
Atom_features – Numpy array containing atom features.
- Return type
ndarray
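A minimal sketch of using these helper methods directly; ordinarily featurize() calls them internally:

>>> import deepchem as dc
>>> from rdkit import Chem
>>> feat = dc.feat.MATFeaturizer()
>>> mol = feat.construct_mol(Chem.MolFromSmiles("CCO"))
>>> node_feats = feat.construct_node_features_matrix(mol)
>>> # node_feats is the per-atom feature matrix used to build the MAT input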
- featurize(datapoints, log_every_n=1000, **kwargs) numpy.ndarray [source]¶
Calculate features for molecules.
- Parameters
datapoints (rdkit.Chem.rdchem.Mol / SMILES string / iterable) – RDKit Mol, or SMILES string or iterable sequence of RDKit mols/SMILES strings.
log_every_n (int, default 1000) – Logging messages reported every log_every_n samples.
- Returns
features – A numpy array containing a featurized representation of datapoints.
- Return type
np.ndarray
CircularFingerprint¶
- class CircularFingerprint(radius: int = 2, size: int = 2048, chiral: bool = False, bonds: bool = True, features: bool = False, sparse: bool = False, smiles: bool = False, is_counts_based: bool = False)[source]¶
Circular (Morgan) fingerprints.
Extended Connectivity Circular Fingerprints compute a bag-of-words style representation of a molecule by breaking it into local neighborhoods and hashing them into a bit vector of the specified size. They are used specifically for structure-activity modelling. See [1] for more details.
References
- 1
Rogers, David, and Mathew Hahn. “Extended-connectivity fingerprints.” Journal of chemical information and modeling 50.5 (2010): 742-754.
Note
This class requires RDKit to be installed.
Examples
>>> import deepchem as dc
>>> from rdkit import Chem
>>> smiles = ['C1=CC=CC=C1']
>>> # Example 1: (size = 2048, radius = 4)
>>> featurizer = dc.feat.CircularFingerprint(size=2048, radius=4)
>>> features = featurizer.featurize(smiles)
>>> type(features[0])
<class 'numpy.ndarray'>
>>> features[0].shape
(2048,)
>>> # Example 2: (size = 2048, radius = 8, sparse = True, smiles = True)
>>> featurizer = dc.feat.CircularFingerprint(size=2048, radius=8,
...                                          sparse=True, smiles=True)
>>> features = featurizer.featurize(smiles)
>>> type(features[0])  # dict containing fingerprints
<class 'dict'>
- __init__(radius: int = 2, size: int = 2048, chiral: bool = False, bonds: bool = True, features: bool = False, sparse: bool = False, smiles: bool = False, is_counts_based: bool = False)[source]¶
- Parameters
radius (int, optional (default 2)) – Fingerprint radius.
size (int, optional (default 2048)) – Length of generated bit vector.
chiral (bool, optional (default False)) – Whether to consider chirality in fingerprint generation.
bonds (bool, optional (default True)) – Whether to consider bond order in fingerprint generation.
features (bool, optional (default False)) – Whether to use feature information instead of atom information; see RDKit docs for more info.
sparse (bool, optional (default False)) – Whether to return a dict for each molecule containing the sparse fingerprint.
smiles (bool, optional (default False)) – Whether to calculate SMILES strings for fragment IDs (only applicable when calculating sparse fingerprints).
is_counts_based (bool, optional (default False)) – Whether to generate a counts-based fingerprint.
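As a sketch of the counts-based variant, where each position holds a substructure count rather than a 0/1 bit (the output length follows the size parameter):

>>> import deepchem as dc
>>> featurizer = dc.feat.CircularFingerprint(size=1024, is_counts_based=True)
>>> features = featurizer.featurize(['C1=CC=CC=C1'])
>>> features[0].shape
(1024,)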
- featurize(datapoints, log_every_n=1000, **kwargs) numpy.ndarray [source]¶
Calculate features for molecules.
- Parameters
datapoints (rdkit.Chem.rdchem.Mol / SMILES string / iterable) – RDKit Mol, or SMILES string or iterable sequence of RDKit mols/SMILES strings.
log_every_n (int, default 1000) – Logging messages reported every log_every_n samples.
- Returns
features – A numpy array containing a featurized representation of datapoints.
- Return type
np.ndarray
PubChemFingerprint¶
- class PubChemFingerprint[source]¶
PubChem Fingerprint.
The PubChem fingerprint is an 881-bit structural key, which is used by PubChem for similarity searching. Please confirm the details in [1].
References
- 1
ftp://ftp.ncbi.nlm.nih.gov/pubchem/specifications/pubchem_fingerprints.pdf
Note
This class requires RDKit and PubChemPy to be installed. PubChemPy uses a REST API to get the fingerprint, so you need internet access.
Examples
>>> import deepchem as dc
>>> smiles = ['CCC']
>>> featurizer = dc.feat.PubChemFingerprint()
>>> features = featurizer.featurize(smiles)
>>> type(features[0])
<class 'numpy.ndarray'>
>>> features[0].shape
(881,)
Mol2VecFingerprint¶
- class Mol2VecFingerprint(pretrain_model_path: Optional[str] = None, radius: int = 1, unseen: str = 'UNK')[source]¶
Mol2Vec fingerprints.
This class converts molecules to vector representations using Mol2Vec. Mol2Vec is an unsupervised machine learning approach for learning vector representations of molecular substructures; the algorithm is based on Word2Vec, one of the most popular techniques for learning word embeddings with neural networks in NLP. Please see the details in [1].
Mol2Vec requires a pretrained model, so we use the model provided in the mol2vec GitHub repository [2]. The default model was trained on 20 million compounds downloaded from ZINC using the following parameters:
radius 1
UNK to replace all identifiers that appear less than 4 times
skip-gram and window size of 10
embeddings size 300
References
- 1
Jaeger, Sabrina, Simone Fulle, and Samo Turk. “Mol2vec: unsupervised machine learning approach with chemical intuition.” Journal of chemical information and modeling 58.1 (2018): 27-35.
- 2
https://github.com/samoturk/mol2vec
Note
This class requires mol2vec to be installed.
Examples
>>> import deepchem as dc
>>> from rdkit import Chem
>>> smiles = ['CCC']
>>> featurizer = dc.feat.Mol2VecFingerprint()
>>> features = featurizer.featurize(smiles)
>>> type(features)
<class 'numpy.ndarray'>
>>> features[0].shape
(300,)
- __init__(pretrain_model_path: Optional[str] = None, radius: int = 1, unseen: str = 'UNK')[source]¶
- Parameters
pretrain_model_path (str, optional) – The path to the pretrained model. If this value is None, we use the model published in the mol2vec GitHub repository (https://github.com/samoturk/mol2vec/tree/master/examples/models). The model is trained on 20 million compounds downloaded from ZINC.
radius (int, optional (default 1)) – The fingerprint radius. The default value was used to train the model which is put on github repository.
unseen (str, optional (default 'UNK')) – The string used to replace uncommon words/identifiers during training.
- sentences2vec(sentences: list, model, unseen=None) numpy.ndarray [source]¶
Generate vectors for each sentence (list) in a list of sentences. Each vector is simply the sum of the vectors for its individual words.
- Parameters
sentences (list, array) – List with sentences
model (word2vec.Word2Vec) – Gensim word2vec model
unseen (None, str) – Keyword for unseen words. If None, those words are skipped. https://stats.stackexchange.com/questions/163005/how-to-set-the-dictionary-for-text-analysis-using-neural-networks/163032#163032
- Return type
np.array
- featurize(datapoints, log_every_n=1000, **kwargs) numpy.ndarray [source]¶
Calculate features for molecules.
- Parameters
datapoints (rdkit.Chem.rdchem.Mol / SMILES string / iterable) – RDKit Mol, or SMILES string or iterable sequence of RDKit mols/SMILES strings.
log_every_n (int, default 1000) – Logging messages reported every log_every_n samples.
- Returns
features – A numpy array containing a featurized representation of datapoints.
- Return type
np.ndarray
RDKitDescriptors¶
- class RDKitDescriptors(descriptors: List[str] = [], is_normalized: bool = False, use_fragment: bool = True, ipc_avg: bool = True, use_bcut2d: bool = True, labels_only: bool = False)[source]¶
RDKit descriptors.
This class computes a list of chemical descriptors like molecular weight, number of valence electrons, maximum and minimum partial charge, etc using RDKit.
This class can also compute normalized descriptors, if required. (The implementation for normalization is based on the RDKit2DNormalized() method in the ‘descriptastorus’ library.)
When the is_normalized option is set to True, descriptor values are normalized across the sample by fitting a cumulative distribution function. CDFs were used as opposed to simpler scaling algorithms mainly because CDFs have the useful property that ‘each value has the same meaning: the percentage of the population observed below the raw feature value.’
Warning: Currently, the normalizing cdf parameters are not available for BCUT2D descriptors. (BCUT2D_MWHI, BCUT2D_MWLOW, BCUT2D_CHGHI, BCUT2D_CHGLO, BCUT2D_LOGPHI, BCUT2D_LOGPLOW, BCUT2D_MRHI, BCUT2D_MRLOW)
Note
This class requires RDKit to be installed.
Examples
>>> import deepchem as dc
>>> smiles = ['CC(=O)OC1=CC=CC=C1C(=O)O']
>>> featurizer = dc.feat.RDKitDescriptors()
>>> features = featurizer.featurize(smiles)
>>> type(features[0])
<class 'numpy.ndarray'>
>>> features[0].shape
(208,)
- __init__(descriptors: List[str] = [], is_normalized: bool = False, use_fragment: bool = True, ipc_avg: bool = True, use_bcut2d: bool = True, labels_only: bool = False)[source]¶
Initialize this featurizer.
- Parameters
descriptors (List[str], default []) – List of RDKit descriptors to compute. When empty, values are computed for descriptors which are chosen based on the options set in the other arguments.
use_fragment (bool, optional (default True)) – If True, the return value includes the fragment binary descriptors like ‘fr_XXX’.
ipc_avg (bool, optional (default True)) – If True, the IPC descriptor calculates with avg=True option. Please see this issue: https://github.com/rdkit/rdkit/issues/1527.
is_normalized (bool, optional (default False)) – If True, the return value contains normalized features.
use_bcut2d (bool, optional (default True)) – If True, the return value includes the descriptors like ‘BCUT2D_XXX’.
labels_only (bool, optional (default False)) – Returns only the presence or absence of a group.
Notes
- If both labels_only and is_normalized are True, then is_normalized takes precedence and labels_only will not be applied.
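A sketch of restricting the computation to a named subset of descriptors (‘MolWt’ and ‘TPSA’ are standard RDKit descriptor names; support for arbitrary subsets is assumed from the descriptors parameter):

>>> import deepchem as dc
>>> featurizer = dc.feat.RDKitDescriptors(descriptors=['MolWt', 'TPSA'])
>>> features = featurizer.featurize(['CCO'])
>>> # features[0] now holds only the requested descriptor values, in order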
- featurize(datapoints, log_every_n=1000, **kwargs) numpy.ndarray [source]¶
Calculate features for molecules.
- Parameters
datapoints (rdkit.Chem.rdchem.Mol / SMILES string / iterable) – RDKit Mol, or SMILES string or iterable sequence of RDKit mols/SMILES strings.
log_every_n (int, default 1000) – Logging messages reported every log_every_n samples.
- Returns
features – A numpy array containing a featurized representation of datapoints.
- Return type
np.ndarray
MordredDescriptors¶
- class MordredDescriptors(ignore_3D: bool = True)[source]¶
Mordred descriptors.
This class computes a list of chemical descriptors using Mordred. Please see the details about all descriptors in [1], [2].
References
- 1
Moriwaki, Hirotomo, et al. “Mordred: a molecular descriptor calculator.” Journal of cheminformatics 10.1 (2018): 4.
- 2
http://mordred-descriptor.github.io/documentation/master/descriptors.html
Note
This class requires Mordred to be installed.
Examples
>>> import deepchem as dc
>>> smiles = ['CC(=O)OC1=CC=CC=C1C(=O)O']
>>> featurizer = dc.feat.MordredDescriptors(ignore_3D=True)
>>> features = featurizer.featurize(smiles)
>>> type(features[0])
<class 'numpy.ndarray'>
>>> features[0].shape
(1613,)
- __init__(ignore_3D: bool = True)[source]¶
- Parameters
ignore_3D (bool, optional (default True)) – Whether to use 3D information or not.
- featurize(datapoints, log_every_n=1000, **kwargs) numpy.ndarray [source]¶
Calculate features for molecules.
- Parameters
datapoints (rdkit.Chem.rdchem.Mol / SMILES string / iterable) – RDKit Mol, or SMILES string or iterable sequence of RDKit mols/SMILES strings.
log_every_n (int, default 1000) – Logging messages reported every log_every_n samples.
- Returns
features – A numpy array containing a featurized representation of datapoints.
- Return type
np.ndarray
CoulombMatrix¶
- class CoulombMatrix(max_atoms: int, remove_hydrogens: bool = False, randomize: bool = False, upper_tri: bool = False, n_samples: int = 1, seed: Optional[int] = None)[source]¶
Calculate Coulomb matrices for molecules.
Coulomb matrices provide a representation of the electronic structure of a molecule. For a molecule with N atoms, the Coulomb matrix is an N x N matrix where each element gives the strength of the electrostatic interaction between two atoms. The method is described in more detail in [1].
Examples
>>> import deepchem as dc
>>> featurizers = dc.feat.CoulombMatrix(max_atoms=23)
>>> input_file = 'deepchem/feat/tests/data/water.sdf'  # really backed by water.sdf.csv
>>> tasks = ["atomization_energy"]
>>> loader = dc.data.SDFLoader(tasks, featurizer=featurizers)
>>> dataset = loader.create_dataset(input_file)
References
- 1
Montavon, Grégoire, et al. “Learning invariant representations of molecules for atomization energy prediction.” Advances in neural information processing systems. 2012.
Note
This class requires RDKit to be installed.
- __init__(max_atoms: int, remove_hydrogens: bool = False, randomize: bool = False, upper_tri: bool = False, n_samples: int = 1, seed: Optional[int] = None)[source]¶
Initialize this featurizer.
- Parameters
max_atoms (int) – The maximum number of atoms expected for molecules this featurizer will process.
remove_hydrogens (bool, optional (default False)) – If True, remove hydrogens before processing them.
randomize (bool, optional (default False)) – If True, use the randomize_coulomb_matrix method to randomize the Coulomb matrices.
upper_tri (bool, optional (default False)) – Generate only upper triangle part of Coulomb matrices.
n_samples (int, optional (default 1)) – If randomize is set to True, the number of random samples to draw.
seed (int, optional (default None)) – Random seed to use.
- coulomb_matrix(mol: Any) numpy.ndarray [source]¶
Generate Coulomb matrices for each conformer of the given molecule.
- Parameters
mol (rdkit.Chem.rdchem.Mol) – RDKit Mol object
- Returns
The coulomb matrices of the given molecule
- Return type
np.ndarray
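A sketch of calling coulomb_matrix directly; the 3D conformer generation via RDKit’s AllChem shown here is an assumption of this example, since the method reads atomic positions from conformers:

>>> import deepchem as dc
>>> from rdkit import Chem
>>> from rdkit.Chem import AllChem
>>> mol = Chem.AddHs(Chem.MolFromSmiles('C'))
>>> _ = AllChem.EmbedMolecule(mol)  # generate a 3D conformer
>>> featurizer = dc.feat.CoulombMatrix(max_atoms=5)
>>> matrices = featurizer.coulomb_matrix(mol)
>>> # one max_atoms x max_atoms Coulomb matrix per conformer of the molecule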
- randomize_coulomb_matrix(m: numpy.ndarray) List[numpy.ndarray] [source]¶
Randomize a Coulomb matrix as described in [1]:
Compute row norms for M in a vector row_norms.
Sample a zero-mean unit-variance noise vector e with dimension equal to row_norms.
Permute the rows and columns of M with the permutation that sorts row_norms + e.
- Parameters
m (np.ndarray) – Coulomb matrix.
- Returns
List of the randomized Coulomb matrices.
- Return type
List[np.ndarray]
References
- 1
Montavon et al., New Journal of Physics, 15, (2013), 095003
- static get_interatomic_distances(conf: Any) numpy.ndarray [source]¶
Get interatomic distances for atoms in a molecular conformer.
- Parameters
conf (rdkit.Chem.rdchem.Conformer) – Molecule conformer.
- Returns
The distances matrix for all atoms in a molecule
- Return type
np.ndarray
- featurize(datapoints, log_every_n=1000, **kwargs) numpy.ndarray [source]¶
Calculate features for molecules.
- Parameters
datapoints (rdkit.Chem.rdchem.Mol / SMILES string / iterable) – RDKit Mol, or SMILES string or iterable sequence of RDKit mols/SMILES strings.
log_every_n (int, default 1000) – Logging messages reported every log_every_n samples.
- Returns
features – A numpy array containing a featurized representation of datapoints.
- Return type
np.ndarray
CoulombMatrixEig¶
- class CoulombMatrixEig(max_atoms: int, remove_hydrogens: bool = False, randomize: bool = False, n_samples: int = 1, seed: Optional[int] = None)[source]¶
Calculate the eigenvalues of Coulomb matrices for molecules.
This featurizer computes the eigenvalues of the Coulomb matrices for provided molecules. Coulomb matrices are described in [1].
Examples
>>> import deepchem as dc
>>> featurizers = dc.feat.CoulombMatrixEig(max_atoms=23)
>>> input_file = 'deepchem/feat/tests/data/water.sdf'  # really backed by water.sdf.csv
>>> tasks = ["atomization_energy"]
>>> loader = dc.data.SDFLoader(tasks, featurizer=featurizers)
>>> dataset = loader.create_dataset(input_file)
References
- 1
Montavon, Grégoire, et al. “Learning invariant representations of molecules for atomization energy prediction.” Advances in neural information processing systems. 2012.
- __init__(max_atoms: int, remove_hydrogens: bool = False, randomize: bool = False, n_samples: int = 1, seed: Optional[int] = None)[source]¶
Initialize this featurizer.
- Parameters
max_atoms (int) – The maximum number of atoms expected for molecules this featurizer will process.
remove_hydrogens (bool, optional (default False)) – If True, remove hydrogens before processing them.
randomize (bool, optional (default False)) – If True, use the randomize_coulomb_matrix method to randomize the Coulomb matrices.
n_samples (int, optional (default 1)) – If randomize is set to True, the number of random samples to draw.
seed (int, optional (default None)) – Random seed to use.
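Analogously to CoulombMatrix, a sketch of featurizing a conformer-embedded molecule directly (the conformer generation step is an assumption of this example):

>>> import deepchem as dc
>>> from rdkit import Chem
>>> from rdkit.Chem import AllChem
>>> mol = Chem.AddHs(Chem.MolFromSmiles('CC'))
>>> _ = AllChem.EmbedMolecule(mol)
>>> featurizer = dc.feat.CoulombMatrixEig(max_atoms=10)
>>> features = featurizer.featurize([mol])
>>> # features[0] holds the Coulomb matrix eigenvalue spectrum, zero-padded to max_atoms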
- coulomb_matrix(mol: Any) numpy.ndarray [source]¶
Generate Coulomb matrices for each conformer of the given molecule.
- Parameters
mol (rdkit.Chem.rdchem.Mol) – RDKit Mol object
- Returns
The coulomb matrices of the given molecule
- Return type
np.ndarray
- featurize(datapoints, log_every_n=1000, **kwargs) numpy.ndarray [source]¶
Calculate features for molecules.
- Parameters
datapoints (rdkit.Chem.rdchem.Mol / SMILES string / iterable) – RDKit Mol, or SMILES string or iterable sequence of RDKit mols/SMILES strings.
log_every_n (int, default 1000) – Logging messages reported every log_every_n samples.
- Returns
features – A numpy array containing a featurized representation of datapoints.
- Return type
np.ndarray
- static get_interatomic_distances(conf: Any) numpy.ndarray [source]¶
Get interatomic distances for atoms in a molecular conformer.
- Parameters
conf (rdkit.Chem.rdchem.Conformer) – Molecule conformer.
- Returns
The distances matrix for all atoms in a molecule
- Return type
np.ndarray
- randomize_coulomb_matrix(m: numpy.ndarray) List[numpy.ndarray] [source]¶
Randomize a Coulomb matrix as described in [1]:
Compute row norms for M in a vector row_norms.
Sample a zero-mean unit-variance noise vector e with dimension equal to row_norms.
Permute the rows and columns of M with the permutation that sorts row_norms + e.
- Parameters
m (np.ndarray) – Coulomb matrix.
- Returns
List of the randomized Coulomb matrices.
- Return type
List[np.ndarray]
References
- 1
Montavon et al., New Journal of Physics, 15, (2013), 095003
AtomicCoordinates¶
- class AtomicCoordinates(use_bohr: bool = False)[source]¶
Calculate atomic coordinates.
Examples
>>> import deepchem as dc
>>> from rdkit import Chem
>>> mol = Chem.MolFromSmiles('C1C=CC=CC=1')
>>> n_atoms = len(mol.GetAtoms())
>>> n_atoms
6
>>> featurizer = dc.feat.AtomicCoordinates(use_bohr=False)
>>> features = featurizer.featurize([mol])
>>> type(features[0])
<class 'numpy.ndarray'>
>>> features[0].shape  # (n_atoms, 3)
(6, 3)
Note
This class requires RDKit to be installed.
- __init__(use_bohr: bool = False)[source]¶
- Parameters
use_bohr (bool, optional (default False)) – Whether to use bohr or angstrom as a coordinate unit.
- featurize(datapoints, log_every_n=1000, **kwargs) numpy.ndarray [source]¶
Calculate features for molecules.
- Parameters
datapoints (rdkit.Chem.rdchem.Mol / SMILES string / iterable) – RDKit Mol, or SMILES string or iterable sequence of RDKit mols/SMILES strings.
log_every_n (int, default 1000) – Logging messages reported every log_every_n samples.
- Returns
features – A numpy array containing a featurized representation of datapoints.
- Return type
np.ndarray
BPSymmetryFunctionInput¶
- class BPSymmetryFunctionInput(max_atoms: int)[source]¶
Calculate the symmetry function for each atom in the molecule.
This method is described in [1].
Examples
>>> import deepchem as dc
>>> smiles = ['C1C=CC=CC=1']
>>> featurizer = dc.feat.BPSymmetryFunctionInput(max_atoms=10)
>>> features = featurizer.featurize(smiles)
>>> type(features[0])
<class 'numpy.ndarray'>
>>> features[0].shape  # (max_atoms, 4)
(10, 4)
References
- 1
Behler, Jörg, and Michele Parrinello. “Generalized neural-network representation of high-dimensional potential-energy surfaces.” Physical review letters 98.14 (2007): 146401.
Note
This class requires RDKit to be installed.
- __init__(max_atoms: int)[source]¶
Initialize this featurizer.
- Parameters
max_atoms (int) – The maximum number of atoms expected for molecules this featurizer will process.
- featurize(datapoints, log_every_n=1000, **kwargs) numpy.ndarray [source]¶
Calculate features for molecules.
- Parameters
datapoints (rdkit.Chem.rdchem.Mol / SMILES string / iterable) – RDKit Mol, or SMILES string or iterable sequence of RDKit mols/SMILES strings.
log_every_n (int, default 1000) – Logging messages reported every log_every_n samples.
- Returns
features – A numpy array containing a featurized representation of datapoints.
- Return type
np.ndarray
SmilesToSeq¶
- class SmilesToSeq(char_to_idx: Dict[str, int], max_len: int = 250, pad_len: int = 10)[source]¶
SmilesToSeq Featurizer takes a SMILES string and turns it into a sequence. Details taken from [1].
SMILES strings smaller than a specified max length (max_len) are padded using the PAD token while those larger than the max length are not considered. Based on the paper, there is also the option to add extra padding (pad_len) on both sides of the string after length normalization. Using a character to index (char_to_idx) mapping, the SMILES characters are turned into indices and the resulting sequence of indices serves as the input for an embedding layer.
References
- 1
Goh, Garrett B., et al. “Using rule-based labels for weak supervised learning: a ChemNet for transferable chemical property prediction.” Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2018.
Note
This class requires RDKit to be installed.
- __init__(char_to_idx: Dict[str, int], max_len: int = 250, pad_len: int = 10)[source]¶
Initialize this class.
- Parameters
char_to_idx (Dict) – Dictionary containing character to index mappings for unique characters
max_len (int, default 250) – Maximum allowed length of the SMILES string.
pad_len (int, default 10) – Amount of padding to add on either side of the SMILES seq
- to_seq(smile: List[str]) numpy.ndarray [source]¶
Turns a list of SMILES characters into an array of indices.
- featurize(datapoints, log_every_n=1000, **kwargs) numpy.ndarray [source]¶
Calculate features for molecules.
- Parameters
datapoints (rdkit.Chem.rdchem.Mol / SMILES string / iterable) – RDKit Mol, or SMILES string or iterable sequence of RDKit mols/SMILES strings.
log_every_n (int, default 1000) – Logging messages reported every log_every_n samples.
- Returns
features – A numpy array containing a featurized representation of datapoints.
- Return type
np.ndarray
SmilesToImage¶
- class SmilesToImage(img_size: int = 80, res: float = 0.5, max_len: int = 250, img_spec: str = 'std')[source]¶
Convert SMILES string to an image.
SmilesToImage Featurizer takes a SMILES string, and turns it into an image. Details taken from [1]_.
The default size for the image is 80 x 80. Two image modes are currently supported: std and engd. std is the grayscale specification, with atomic numbers as pixel values for atom positions and a constant value of 2 for bond positions. engd is a 4-channel specification, which uses atom properties like hybridization, valency and charges in addition to atomic number. Bond type is also used for the bonds.
The coordinates of all atoms are computed, and lines are drawn between atoms to indicate bonds. For the respective channels, the atom and bond positions are set to the property values as mentioned in the paper.
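Only img_spec changes between the two modes. A sketch of the 4-channel 'engd' mode (the output shape is inferred from the channel count described above; compare the 'std' example below):

```python
import deepchem as dc

featurizer = dc.feat.SmilesToImage(img_size=80, img_spec='engd')
images = featurizer.featurize(['CCO'])
# With the 4-channel 'engd' specification, images[0].shape is expected
# to be (80, 80, 4) rather than the (80, 80, 1) of the 'std' mode.
```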
Examples
>>> import deepchem as dc
>>> smiles = ['CC(=O)OC1=CC=CC=C1C(=O)O']
>>> featurizer = dc.feat.SmilesToImage(img_size=80, img_spec='std')
>>> images = featurizer.featurize(smiles)
>>> type(images[0])
<class 'numpy.ndarray'>
>>> images[0].shape  # (img_size, img_size, 1)
(80, 80, 1)
References
- 1
Goh, Garrett B., et al. “Using rule-based labels for weak supervised learning: a ChemNet for transferable chemical property prediction.” Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2018.
Note
This class requires RDKit to be installed.
- __init__(img_size: int = 80, res: float = 0.5, max_len: int = 250, img_spec: str = 'std')[source]¶
- Parameters
img_size (int, default 80) – Size of the image tensor
res (float, default 0.5) – Resolution of each pixel, in Angstroms.
max_len (int, default 250) – Maximum allowed length of SMILES string
img_spec (str, default std) – Indicates the channel organization of the image tensor
- featurize(datapoints, log_every_n=1000, **kwargs) numpy.ndarray [source]¶
Calculate features for molecules.
- Parameters
datapoints (rdkit.Chem.rdchem.Mol / SMILES string / iterable) – RDKit Mol, or SMILES string or iterable sequence of RDKit mols/SMILES strings.
log_every_n (int, default 1000) – Logging messages reported every log_every_n samples.
- Returns
features – A numpy array containing a featurized representation of datapoints.
- Return type
np.ndarray
OneHotFeaturizer¶
- class OneHotFeaturizer(charset: List[str] = ['#', ')', '(', '+', '-', '/', '1', '3', '2', '5', '4', '7', '6', '8', '=', '@', 'C', 'B', 'F', 'I', 'H', 'O', 'N', 'S', '[', ']', '\\', 'c', 'l', 'o', 'n', 'p', 's', 'r'], max_length: Optional[int] = 100)[source]¶
Encodes any arbitrary string or molecule as a one-hot array.
This featurizer encodes the characters within any given string as a one-hot array. It also works with RDKit molecules: it can convert RDKit molecules to SMILES strings and then one-hot encode the characters in said strings.
Standalone Usage:
>>> import deepchem as dc
>>> featurizer = dc.feat.OneHotFeaturizer()
>>> smiles = ['CCC']
>>> encodings = featurizer.featurize(smiles)
>>> type(encodings[0])
<class 'numpy.ndarray'>
>>> encodings[0].shape
(100, 35)
>>> featurizer.untransform(encodings[0])
'CCC'
Note
This class needs RDKit to be installed in order to accept RDKit molecules as inputs.
It does not need RDKit to be installed to work with arbitrary strings.
- __init__(charset: List[str] = ['#', ')', '(', '+', '-', '/', '1', '3', '2', '5', '4', '7', '6', '8', '=', '@', 'C', 'B', 'F', 'I', 'H', 'O', 'N', 'S', '[', ']', '\\', 'c', 'l', 'o', 'n', 'p', 's', 'r'], max_length: Optional[int] = 100)[source]¶
Initialize featurizer.
- Parameters
charset (List[str] (default ZINC_CHARSET)) – A list of strings, where each string is length 1 and unique.
max_length (Optional[int], optional (default 100)) – The max length for the string. If the length of a string is shorter than max_length, the string is padded with spaces. If max_length is None, no padding is performed and arbitrary-length strings are allowed.
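A small sketch of the max_length=None behavior described above (the output shape is inferred from the default 34-character charset plus one "unknown" column, as in the example earlier):

```python
import deepchem as dc

# With max_length=None no padding is performed, so each encoding has one
# row per character of the input string.
featurizer = dc.feat.OneHotFeaturizer(max_length=None)
encodings = featurizer.featurize(['CCC'])
# encodings[0].shape is expected to be (3, 35): 3 characters, and
# 34 charset columns plus one "unknown" column.
```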
- featurize(datapoints: Iterable[Any], log_every_n: int = 1000, **kwargs) numpy.ndarray [source]¶
Featurize strings or mols.
- Parameters
datapoints (list) – A list of either strings (str or numpy.str_) or RDKit molecules.
log_every_n (int, optional (default 1000)) – How many elements are featurized every time a featurization is logged.
- pad_smile(smiles: str) str [source]¶
Pad SMILES string to self.pad_length
- Parameters
smiles (str) – The SMILES string to be padded.
- Returns
SMILES string space padded to self.pad_length
- Return type
str
SparseMatrixOneHotFeaturizer¶
- class SparseMatrixOneHotFeaturizer(charset: List[str] = ['A', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'K', 'L', 'M', 'N', 'P', 'Q', 'R', 'S', 'T', 'V', 'W', 'Y', 'X', 'Z', 'B', 'U', 'O'])[source]¶
Encodes any arbitrary string as a one-hot array.
This featurizer uses the sklearn OneHotEncoder to create a sparse-matrix representation of a one-hot array for any string. It is intended for large datasets that would cause memory overload with a standard featurizer such as OneHotFeaturizer. For example: SwissprotDataset.
Examples
>>> import deepchem as dc
>>> featurizer = dc.feat.SparseMatrixOneHotFeaturizer()
>>> sequence = "MMMQLA"
>>> encodings = featurizer.featurize([sequence])
>>> encodings[0].shape
(6, 25)
- __init__(charset: List[str] = ['A', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'K', 'L', 'M', 'N', 'P', 'Q', 'R', 'S', 'T', 'V', 'W', 'Y', 'X', 'Z', 'B', 'U', 'O'])[source]¶
Initialize featurizer.
- Parameters
charset (List[str] (default code)) – A list of strings, where each string is length 1 and unique.
- featurize(datapoints: Iterable[Any], log_every_n: int = 1000, **kwargs) numpy.ndarray [source]¶
Featurize strings.
- Parameters
datapoints (list) – A list of either strings (str or numpy.str_)
log_every_n (int, optional (default 1000)) – How many elements are featurized every time a featurization is logged.
RawFeaturizer¶
- class RawFeaturizer(smiles: bool = False)[source]¶
Encodes a molecule as a SMILES string or RDKit mol.
This featurizer can be useful when you’re trying to transform a large collection of RDKit Mol objects into SMILES strings, or alternatively as a “no-op” featurizer in your molecular pipeline.
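For example (a minimal sketch):

```python
import deepchem as dc
from rdkit import Chem

mols = [Chem.MolFromSmiles('CCO'), Chem.MolFromSmiles('c1ccccc1')]
# smiles=True converts each RDKit Mol to a SMILES string;
# smiles=False (the default) passes the RDKit Mol through unchanged.
smiles_strings = dc.feat.RawFeaturizer(smiles=True).featurize(mols)
raw_mols = dc.feat.RawFeaturizer(smiles=False).featurize(mols)
```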
Note
This class requires RDKit to be installed.
- __init__(smiles: bool = False)[source]¶
Initialize this featurizer.
- Parameters
smiles (bool, optional (default False)) – If True, encode this molecule as a SMILES string. Otherwise, encode it as an RDKit Mol.
- featurize(datapoints, log_every_n=1000, **kwargs) numpy.ndarray [source]¶
Calculate features for molecules.
- Parameters
datapoints (rdkit.Chem.rdchem.Mol / SMILES string / iterable) – RDKit Mol, or SMILES string or iterable sequence of RDKit mols/SMILES strings.
log_every_n (int, default 1000) – Logging messages reported every log_every_n samples.
- Returns
features – A numpy array containing a featurized representation of datapoints.
- Return type
np.ndarray
SNAPFeaturizer¶
- class SNAPFeaturizer(use_original_atoms_order=False)[source]¶
This featurizer is based on the SNAP featurizer used in the paper [1]_.
Example
>>> smiles = ["CC(=O)C"]
>>> featurizer = SNAPFeaturizer()
>>> print(featurizer.featurize(smiles))
[GraphData(node_features=[4, 2], edge_index=[2, 6], edge_features=[6, 2])]
References
- 1
Hu, W. et al. Strategies for Pre-training Graph Neural Networks. Preprint at https://doi.org/10.48550/arXiv.1905.12265 (2020).
- __init__(use_original_atoms_order=False)[source]¶
- Parameters
use_original_atoms_order (bool, default False) – Whether to use original atom ordering or canonical ordering (default)
- featurize(datapoints, log_every_n=1000, **kwargs) numpy.ndarray [source]¶
Calculate features for molecules.
- Parameters
datapoints (rdkit.Chem.rdchem.Mol / SMILES string / iterable) – RDKit Mol, or SMILES string or iterable sequence of RDKit mols/SMILES strings.
log_every_n (int, default 1000) – Logging messages reported every log_every_n samples.
- Returns
features – A numpy array containing a featurized representation of datapoints.
- Return type
np.ndarray
Molecular Complex Featurizers¶
These featurizers work with three dimensional molecular complexes.
RdkitGridFeaturizer¶
- class RdkitGridFeaturizer(nb_rotations=0, feature_types=None, ecfp_degree=2, ecfp_power=3, splif_power=3, box_width=16.0, voxel_width=1.0, flatten=False, verbose=True, sanitize=False, **kwargs)[source]¶
Featurizes protein-ligand complex using flat features or a 3D grid (in which each voxel is described with a vector of features).
- __init__(nb_rotations=0, feature_types=None, ecfp_degree=2, ecfp_power=3, splif_power=3, box_width=16.0, voxel_width=1.0, flatten=False, verbose=True, sanitize=False, **kwargs)[source]¶
- Parameters
nb_rotations (int, optional (default 0)) – Number of additional random rotations of a complex to generate.
feature_types (list, optional (default ['ecfp'])) –
- Types of features to calculate. Available types are
flat features -> 'ecfp_ligand', 'ecfp_hashed', 'splif_hashed', 'hbond_count'
voxel features -> 'ecfp', 'splif', 'sybyl', 'salt_bridge', 'charge', 'hbond', 'pi_stack', 'cation_pi'
- There are also 3 predefined sets of features
’flat_combined’, ‘voxel_combined’, and ‘all_combined’.
Calculated features are concatenated and their order is preserved (features in predefined sets are in alphabetical order).
ecfp_degree (int, optional (default 2)) – ECFP radius.
ecfp_power (int, optional (default 3)) – Number of bits to store ECFP features (resulting vector will be 2^ecfp_power long)
splif_power (int, optional (default 3)) – Number of bits to store SPLIF features (resulting vector will be 2^splif_power long)
box_width (float, optional (default 16.0)) – Size of a box in which voxel features are calculated. Box is centered on a ligand centroid.
voxel_width (float, optional (default 1.0)) – Size of a 3D voxel in a grid.
flatten (bool, optional (default False)) – Indicate whether calculated features should be flattened. Output is always flattened if flat features are specified in feature_types.
verbose (bool, optional (default True)) – Verbosity of logging.
sanitize (bool, optional (default False)) – If set to True, molecules will be sanitized. Note that calculating some features (e.g. aromatic interactions) requires sanitized molecules.
**kwargs (dict, optional) – Keyword arguments can be used to specify custom cutoffs and bins (see default values below).
Default cutoffs and bins:
hbond_dist_bins: [(2.2, 2.5), (2.5, 3.2), (3.2, 4.0)]
hbond_angle_cutoffs: [5, 50, 90]
splif_contact_bins: [(0, 2.0), (2.0, 3.0), (3.0, 4.5)]
ecfp_cutoff: 4.5
sybyl_cutoff: 7.0
salt_bridges_cutoff: 5.0
pi_stack_dist_cutoff: 4.4
pi_stack_angle_cutoff: 30.0
cation_pi_dist_cutoff: 6.5
cation_pi_angle_cutoff: 30.0
- featurize(datapoints: Optional[Iterable[Tuple[str, str]]] = None, log_every_n: int = 100, **kwargs) numpy.ndarray [source]¶
Calculate features for mol/protein complexes.
- Parameters
datapoints (Iterable[Tuple[str, str]]) – List of filenames (PDB, SDF, etc.) for ligand molecules and proteins. Each element should be a tuple of the form (ligand_filename, protein_filename).
- Returns
features – Array of features
- Return type
np.ndarray
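A minimal usage sketch (the file paths are hypothetical placeholders; the parameters are those documented above):

```python
import deepchem as dc

featurizer = dc.feat.RdkitGridFeaturizer(
    voxel_width=2.0,
    feature_types=['ecfp', 'splif', 'hbond', 'salt_bridge'],
    flatten=True)
# Each datapoint is a (ligand_filename, protein_filename) tuple.
features = featurizer.featurize([('ligand.sdf', 'protein.pdb')])
```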
AtomicConvFeaturizer¶
- class AtomicConvFeaturizer(frag1_num_atoms, frag2_num_atoms, complex_num_atoms, max_num_neighbors, neighbor_cutoff, strip_hydrogens=True)[source]¶
This class computes the featurization that corresponds to AtomicConvModel.
This class computes featurizations needed for AtomicConvModel. Given two molecular structures, it computes a number of useful geometric features. In particular, for each molecule and the global complex, it computes a coordinates matrix of size (N_atoms, 3), where N_atoms is the number of atoms. It also computes a neighbor list: a dictionary with N_atoms elements, where neighbor-list[i] is a list of the atoms that the i-th atom has as neighbors. In addition, it computes a z-matrix for the molecule, which is an array of shape (N_atoms,) containing the atomic number of each atom.
Since the featurization computes these three quantities for each of the two molecules and the complex, a total of 9 quantities are returned for each complex. Note that for efficiency, fragments of the molecules can be provided rather than the full molecules themselves.
- __init__(frag1_num_atoms, frag2_num_atoms, complex_num_atoms, max_num_neighbors, neighbor_cutoff, strip_hydrogens=True)[source]¶
- Parameters
frag1_num_atoms (int) – Maximum number of atoms in fragment 1.
frag2_num_atoms (int) – Maximum number of atoms in fragment 2.
complex_num_atoms (int) – Maximum number of atoms in complex of frag1/frag2 together.
max_num_neighbors (int) – Maximum number of atoms considered as neighbors.
neighbor_cutoff (float) – Maximum distance (angstroms) for two atoms to be considered as neighbors. If more than max_num_neighbors atoms fall within this cutoff, the closest max_num_neighbors will be used.
strip_hydrogens (bool (default True)) – Remove hydrogens before computing featurization.
- featurize(datapoints: Optional[Iterable[Tuple[str, str]]] = None, log_every_n: int = 100, **kwargs) numpy.ndarray [source]¶
Calculate features for mol/protein complexes.
- Parameters
datapoints (Iterable[Tuple[str, str]]) – List of filenames (PDB, SDF, etc.) for ligand molecules and proteins. Each element should be a tuple of the form (ligand_filename, protein_filename).
- Returns
features – Array of features
- Return type
np.ndarray
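A minimal usage sketch (the atom counts and file paths are hypothetical placeholders; the parameters are those documented above):

```python
import deepchem as dc

featurizer = dc.feat.AtomicConvFeaturizer(
    frag1_num_atoms=100,      # e.g. a ligand fragment
    frag2_num_atoms=1000,     # e.g. a protein fragment
    complex_num_atoms=1100,   # frag1 and frag2 together
    max_num_neighbors=12,
    neighbor_cutoff=4.0)
# Each datapoint is a (ligand_filename, protein_filename) tuple.
features = featurizer.featurize([('ligand.sdf', 'protein.pdb')])
```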
Inorganic Crystal Featurizers¶
These featurizers work with datasets of inorganic crystals.
MaterialCompositionFeaturizer¶
Material Composition Featurizers are those that work with datasets of crystal compositions with periodic boundary conditions. For inorganic crystal structures, these featurizers operate on chemical compositions (e.g. “MoS2”). They should be applied on systems that have periodic boundary conditions. Composition featurizers are not designed to work with molecules.
ElementPropertyFingerprint¶
- class ElementPropertyFingerprint(data_source: str = 'matminer')[source]¶
Fingerprint of elemental properties from composition.
Based on the data source chosen, returns properties and statistics (min, max, range, mean, standard deviation, mode) for a compound based on elemental stoichiometry. E.g., the average electronegativity of atoms in a crystal structure. The chemical fingerprint is a vector of these statistics. For a full list of properties and statistics, see
matminer.featurizers.composition.ElementProperty(data_source).feature_labels().
This featurizer requires the optional dependencies pymatgen and matminer. It may be useful when only crystal compositions are available (and not 3D coordinates).
See references [1]_, [2]_, [3]_, [4]_ for more details.
References
- 1
MagPie data: Ward, L. et al. npj Comput Mater 2, 16028 (2016). https://doi.org/10.1038/npjcompumats.2016.28
- 2
Deml data: Deml, A. et al. Physical Review B 93, 085142 (2016). 10.1103/PhysRevB.93.085142
- 3
Matminer: Ward, L. et al. Comput. Mater. Sci. 152, 60-69 (2018).
- 4
Pymatgen: Ong, S.P. et al. Comput. Mater. Sci. 68, 314-319 (2013).
Examples
>>> import deepchem as dc
>>> import pymatgen as mg
>>> comp = mg.core.Composition("Fe2O3")
>>> featurizer = dc.feat.ElementPropertyFingerprint()
>>> features = featurizer.featurize([comp])
>>> type(features[0])
<class 'numpy.ndarray'>
>>> features[0].shape
(65,)
Note
This class requires matminer and Pymatgen to be installed. NaN feature values are automatically converted to 0 by this featurizer.
- __init__(data_source: str = 'matminer')[source]¶
- Parameters
data_source (str of "matminer", "magpie" or "deml" (default "matminer")) – Source for element property data.
- featurize(datapoints: Optional[Iterable[str]] = None, log_every_n: int = 1000, **kwargs) numpy.ndarray [source]¶
Calculate features for crystal compositions.
- Parameters
datapoints (Iterable[str]) – Iterable sequence of composition strings, e.g. “MoS2”.
log_every_n (int, default 1000) – Logging messages reported every log_every_n samples.
- Returns
features – A numpy array containing a featurized representation of compositions.
- Return type
np.ndarray
ElemNetFeaturizer¶
- class ElemNetFeaturizer[source]¶
Fixed size vector of length 86 containing raw fractional elemental compositions in the compound. The 86 chosen elements are based on the original implementation at https://github.com/NU-CUCIS/ElemNet.
Returns a vector containing fractional compositions of each element in the compound.
References
- 1
Jha, D., Ward, L., Paul, A. et al. Sci Rep 8, 17593 (2018). https://doi.org/10.1038/s41598-018-35934-y
Examples
>>> import deepchem as dc
>>> comp = "Fe2O3"
>>> featurizer = dc.feat.ElemNetFeaturizer()
>>> features = featurizer.featurize([comp])
>>> type(features[0])
<class 'numpy.ndarray'>
>>> features[0].shape
(86,)
>>> round(sum(features[0]))
1
Note
This class requires Pymatgen to be installed.
- get_vector(comp: DefaultDict) Optional[numpy.ndarray] [source]¶
Converts a dictionary containing element names and corresponding compositional fractions into a vector of fractions.
- Parameters
comp (collections.defaultdict object) – Dictionary mapping element names to fractional compositions.
- Returns
fractions – Vector of fractional compositions of each element.
- Return type
np.ndarray
MaterialStructureFeaturizer¶
Material Structure Featurizers are those that work with datasets of crystals with periodic boundary conditions. For inorganic crystal structures, these featurizers operate on pymatgen.Structure objects, which include a lattice and 3D coordinates that specify a periodic crystal structure. They should be applied on systems that have periodic boundary conditions. Structure featurizers are not designed to work with molecules.
SineCoulombMatrix¶
- class SineCoulombMatrix(max_atoms: int = 100, flatten: bool = True)[source]¶
Calculate sine Coulomb matrix for crystals.
A variant of Coulomb matrix for periodic crystals.
The sine Coulomb matrix is identical to the Coulomb matrix, except that the inverse distance function is replaced by the inverse of sin**2 of the vector between sites which are periodic in the dimensions of the crystal lattice.
Features are flattened into a vector of matrix eigenvalues by default for ML-readiness. To ensure that all feature vectors are equal length, the maximum number of atoms (eigenvalues) in the input dataset must be specified.
This featurizer requires the optional dependencies pymatgen and matminer. It may be useful when crystal structures with 3D coordinates are available.
See [1]_ for more details.
References
- 1
Faber et al. “Crystal Structure Representations for Machine Learning Models of Formation Energies”, Inter. J. Quantum Chem. 115, 16, 2015. https://arxiv.org/abs/1503.07406
Examples
>>> import deepchem as dc
>>> import pymatgen as mg
>>> lattice = mg.core.Lattice.cubic(4.2)
>>> structure = mg.core.Structure(lattice, ["Cs", "Cl"], [[0, 0, 0], [0.5, 0.5, 0.5]])
>>> featurizer = dc.feat.SineCoulombMatrix(max_atoms=2)
>>> features = featurizer.featurize([structure])
>>> type(features[0])
<class 'numpy.ndarray'>
>>> features[0].shape  # (max_atoms,)
(2,)
Note
This class requires matminer and Pymatgen to be installed.
- __init__(max_atoms: int = 100, flatten: bool = True)[source]¶
- Parameters
max_atoms (int (default 100)) – Maximum number of atoms for any crystal in the dataset. Used to pad the Coulomb matrix.
flatten (bool (default True)) – Return flattened vector of matrix eigenvalues.
- featurize(datapoints: Optional[Iterable[Union[Dict[str, Any], Any]]] = None, log_every_n: int = 1000, **kwargs) numpy.ndarray [source]¶
Calculate features for crystal structures.
- Parameters
datapoints (Iterable[Union[Dict, pymatgen.core.Structure]]) – Iterable sequence of pymatgen structure dictionaries or pymatgen.core.Structure. Please confirm the dictionary representations of pymatgen.core.Structure from https://pymatgen.org/pymatgen.core.structure.html.
log_every_n (int, default 1000) – Logging messages reported every log_every_n samples.
- Returns
features – A numpy array containing a featurized representation of datapoints.
- Return type
np.ndarray
CGCNNFeaturizer¶
- class CGCNNFeaturizer(radius: float = 8.0, max_neighbors: float = 12, step: float = 0.2)[source]¶
Calculate structure graph features for crystals.
Based on the implementation in Crystal Graph Convolutional Neural Networks (CGCNN). The method constructs a crystal graph representation including atom features and bond features (neighbor distances). Neighbors are determined by searching in a sphere around atoms in the unit cell. A Gaussian filter is applied to neighbor distances. All units are in angstrom.
This featurizer requires the optional dependency pymatgen. It may be useful when 3D coordinates are available and when using graph network models and crystal graph convolutional networks.
See [1]_ for more details.
References
- 1
T. Xie and J. C. Grossman, “Crystal graph convolutional neural networks for an accurate and interpretable prediction of material properties”, Phys. Rev. Lett. 120, 2018, https://arxiv.org/abs/1710.10324
Examples
>>> import deepchem as dc
>>> import pymatgen as mg
>>> featurizer = dc.feat.CGCNNFeaturizer()
>>> lattice = mg.core.Lattice.cubic(4.2)
>>> structure = mg.core.Structure(lattice, ["Cs", "Cl"], [[0, 0, 0], [0.5, 0.5, 0.5]])
>>> features = featurizer.featurize([structure])
>>> feature = features[0]
>>> print(type(feature))
<class 'deepchem.feat.graph_data.GraphData'>
Note
This class requires Pymatgen to be installed.
- __init__(radius: float = 8.0, max_neighbors: float = 12, step: float = 0.2)[source]¶
- Parameters
radius (float (default 8.0)) – Radius of sphere for finding neighbors of atoms in unit cell.
max_neighbors (int (default 12)) – Maximum number of neighbors to consider when constructing graph.
step (float (default 0.2)) – Step size for Gaussian filter. This value is used when building edge features.
- featurize(datapoints: Optional[Iterable[Union[Dict[str, Any], Any]]] = None, log_every_n: int = 1000, **kwargs) numpy.ndarray [source]¶
Calculate features for crystal structures.
- Parameters
datapoints (Iterable[Union[Dict, pymatgen.core.Structure]]) – Iterable sequence of pymatgen structure dictionaries or pymatgen.core.Structure. Please confirm the dictionary representations of pymatgen.core.Structure from https://pymatgen.org/pymatgen.core.structure.html.
log_every_n (int, default 1000) – Logging messages reported every log_every_n samples.
- Returns
features – A numpy array containing a featurized representation of datapoints.
- Return type
np.ndarray
LCNNFeaturizer¶
- class LCNNFeaturizer(structure: Any, aos: List[str], pbc: List[bool], ns: int = 1, na: int = 1, cutoff: float = 6.0)[source]¶
Calculates 2-D surface graph features in 6 different permutations.
Based on the implementation of the Lattice Graph Convolutional Neural Network (LCNN). This method produces atom-wise features (one-hot encodings) and adjacent neighbors in the specified order of permutations. Neighbors are determined by first extracting a site’s local environment from the primitive cell, then performing graph matching and distance matching to find neighbors. First, the template of the primitive cell needs to be defined, along with periodic boundary conditions and active and spectator site details. The structure (a data point, i.e. a particular configuration of adsorbate atoms) is then passed in for featurization.
This featurization produces a regular graph (equal number of neighbors) along with its permutations along 6 symmetric axes. The transformation is useful when the ordering of the neighboring nodes around a site plays an important role in property prediction. Because it considers the local neighbor environment, the current implementation is well suited to finding neighbors for calculating the formation energy of adsorption tasks, where local adsorption effects matter, as in applications such as catalyst and semiconductor design.
The permuted neighbors are calculated using the primitive cell, i.e. the periodic cells in all the data points are built via lattice transformations of the primitive cell.
Primitive cell format:
- Pymatgen Structure object with a site_properties key containing:
- “SiteTypes”: whether a site is an active site “A1” or a spectator site “S1”.
- ns: the number of spectator-type elements. For “S1” it is 1.
- na: the number of active-type elements. For “A1” it is 1.
- aos: the different species of active elements “A1”.
- pbc: the periodic boundary conditions.
Data point structure format (configuration of atoms):
- Pymatgen Structure object with a site_properties key containing:
- “SiteTypes”: whether a site is an active site “A1” or a spectator site “S1”.
- “oss”: the different occupational sites. For spectator sites, set it to -1.
It is highly recommended that the cells of the data points are redefined directly from the primitive cell, so that the relative coordinates between sites remain consistent and the lattice is not deviated.
References
- 1
Jonathan Lym and Geun Ho Gu, J. Phys. Chem. C 2019, 123, 18951−18959
Examples
>>> import deepchem as dc
>>> from pymatgen.core import Structure
>>> import numpy as np
>>> PRIMITIVE_CELL = {
...     "lattice": [[2.818528, 0.0, 0.0],
...                 [-1.409264, 2.440917, 0.0],
...                 [0.0, 0.0, 25.508255]],
...     "coords": [[0.66667, 0.33333, 0.090221],
...                [0.33333, 0.66667, 0.18043936],
...                [0.0, 0.0, 0.27065772],
...                [0.66667, 0.33333, 0.36087608],
...                [0.33333, 0.66667, 0.45109444],
...                [0.0, 0.0, 0.49656991]],
...     "species": ['H', 'H', 'H', 'H', 'H', 'He'],
...     "site_properties": {'SiteTypes': ['S1', 'S1', 'S1', 'S1', 'S1', 'A1']}
... }
>>> PRIMITIVE_CELL_INF0 = {
...     "cutoff": np.around(6.00),
...     "structure": Structure(**PRIMITIVE_CELL),
...     "aos": ['1', '0', '2'],
...     "pbc": [True, True, False],
...     "ns": 1,
...     "na": 1
... }
>>> DATA_POINT = {
...     "lattice": [[1.409264, -2.440917, 0.0],
...                 [4.227792, 2.440917, 0.0],
...                 [0.0, 0.0, 23.17559]],
...     "coords": [[0.0, 0.0, 0.099299],
...                [0.0, 0.33333, 0.198598],
...                [0.5, 0.16667, 0.297897],
...                [0.0, 0.0, 0.397196],
...                [0.0, 0.33333, 0.496495],
...                [0.5, 0.5, 0.099299],
...                [0.5, 0.83333, 0.198598],
...                [0.0, 0.66667, 0.297897],
...                [0.5, 0.5, 0.397196],
...                [0.5, 0.83333, 0.496495],
...                [0.0, 0.66667, 0.54654766],
...                [0.5, 0.16667, 0.54654766]],
...     "species": ['H', 'H', 'H', 'H', 'H', 'H',
...                 'H', 'H', 'H', 'H', 'He', 'He'],
...     "site_properties": {
...         "SiteTypes": ['S1', 'S1', 'S1', 'S1', 'S1',
...                       'S1', 'S1', 'S1', 'S1', 'S1',
...                       'A1', 'A1'],
...         "oss": ['-1', '-1', '-1', '-1', '-1', '-1',
...                 '-1', '-1', '-1', '-1', '0', '2']
...     }
... }
>>> featuriser = dc.feat.LCNNFeaturizer(**PRIMITIVE_CELL_INF0)
>>> print(type(featuriser._featurize(Structure(**DATA_POINT))))
<class 'deepchem.feat.graph_data.GraphData'>
Notes
This class requires pymatgen, networkx and scipy to be installed.
- __init__(structure: Any, aos: List[str], pbc: List[bool], ns: int = 1, na: int = 1, cutoff: float = 6.0)[source]¶
- Parameters
structure (PymatgenStructure) – Pymatgen Structure object of the primitive cell used for calculating neighbors from lattice transformations. It also requires a site_properties attribute with “SiteTypes” (active or spectator site).
aos (List[str]) – A list of all the active site species. For the Pt, N, NO configuration set it as [‘0’, ‘1’, ‘2’]
pbc (List[bool]) – Periodic Boundary Condition
ns (int (default 1)) – The number of spectator types elements. For “S1” its 1.
na (int (default 1)) – the number of active types elements. For “A1” its 1.
cutoff (float (default 6.00)) – Cutoff radius for getting the local environment. Only used up to 2 decimal places.
- featurize(datapoints: Optional[Iterable[Union[Dict[str, Any], Any]]] = None, log_every_n: int = 1000, **kwargs) numpy.ndarray [source]¶
Calculate features for crystal structures.
- Parameters
datapoints (Iterable[Union[Dict, pymatgen.core.Structure]]) – Iterable sequence of pymatgen structure dictionaries or pymatgen.core.Structure. Please confirm the dictionary representations of pymatgen.core.Structure from https://pymatgen.org/pymatgen.core.structure.html.
log_every_n (int, default 1000) – Logging messages reported every log_every_n samples.
- Returns
features – A numpy array containing a featurized representation of datapoints.
- Return type
np.ndarray
Molecule Tokenizers¶
A tokenizer is in charge of preparing the inputs for a natural language processing model. For many scientific applications, it is possible to treat inputs as “words”/”sentences” and use NLP methods to make meaningful predictions. For example, SMILES strings or DNA sequences have grammatical structure and can be usefully modeled with NLP techniques. DeepChem provides some scientifically relevant tokenizers for use in different applications. These tokenizers are based on those from the Huggingface transformers library (which DeepChem tokenizers inherit from).
The base classes PreTrainedTokenizer and PreTrainedTokenizerFast implement the common methods for encoding string inputs into model inputs and for instantiating/saving Python tokenizers, either from a local file or directory or from a pretrained tokenizer provided by the library (downloaded from HuggingFace’s AWS S3 repository).
PreTrainedTokenizer (transformers.PreTrainedTokenizer) thus implements the main methods for using all the tokenizers:
Tokenizing (splitting strings into sub-word token strings), converting token strings to ids and back, and encoding/decoding (i.e. tokenizing and converting to integers)
Adding new tokens to the vocabulary in a way that is independent of the underlying structure (BPE, SentencePiece…)
Managing special tokens like mask, beginning-of-sentence, etc tokens (adding them, assigning them to attributes in the tokenizer for easy access and making sure they are not split during tokenization)
BatchEncoding holds the output of the tokenizer’s encoding methods (__call__, encode_plus and batch_encode_plus) and is derived from a Python dictionary. When the tokenizer is a pure Python tokenizer, this class behaves just like a standard Python dictionary and holds the various model inputs computed by these methods (input_ids, attention_mask, …). For more details on the base tokenizers which the DeepChem tokenizers inherit from, please refer to the following: HuggingFace tokenizers docs
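For instance, any HuggingFace fast tokenizer returns a BatchEncoding that can be indexed like a dictionary (an illustrative sketch using a public model name, not DeepChem-specific):

```python
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
encoding = tokenizer("CC(=O)O")     # returns a BatchEncoding
print(encoding["input_ids"])        # behaves like a dict of model inputs
print(encoding["attention_mask"])
```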
Tokenization methods on string-based corpora in the life sciences are becoming increasingly popular for NLP-based applications to chemistry and biology. One such example is ChemBERTa, a transformer for molecular property prediction. DeepChem offers a tutorial for utilizing ChemBERTa with an alternate tokenizer, a Byte-Pair Encoder, which can be found here.
SmilesTokenizer¶
The dc.feat.SmilesTokenizer
module inherits from the BertTokenizer class in transformers.
It runs a WordPiece tokenization algorithm over SMILES strings using the SMILES tokenisation regex developed by Schwaller et al.
The SmilesTokenizer employs an atom-wise tokenization strategy using the following Regex expression:
SMI_REGEX_PATTERN = "(\[[^\]]+]|Br?|Cl?|N|O|S|P|F|I|b|c|n|o|s|p|\(|\)|\.|=|#|-|\+|\\\\|\/|:|~|@|\?|>|\*|\$|\%[0-9]{2}|[0-9])"
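Since this is a plain regular expression, you can preview the atom-wise splitting with Python’s re module alone (a sketch; the pattern is transcribed from the definition above):

```python
import re

SMI_REGEX_PATTERN = (r"(\[[^\]]+]|Br?|Cl?|N|O|S|P|F|I|b|c|n|o|s|p|\(|\)|\."
                     r"|=|#|-|\+|\\|\/|:|~|@|\?|>|\*|\$|\%[0-9]{2}|[0-9])")

tokens = re.findall(SMI_REGEX_PATTERN, "CC(=O)OC1=CC=CC=C1C(=O)O")
# ['C', 'C', '(', '=', 'O', ')', 'O', 'C', '1', '=', 'C', 'C', ...]
```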
To use, please install the transformers package using the following pip command:
pip install transformers
- class SmilesTokenizer(vocab_file: str = '', **kwargs)[source]¶
Creates the SmilesTokenizer class. The tokenizer heavily inherits from the BertTokenizer implementation found in HuggingFace’s transformers library. It runs a WordPiece tokenization algorithm over SMILES strings using the SMILES tokenisation regex developed by Schwaller et al.
Please see https://github.com/huggingface/transformers and https://github.com/rxn4chemistry/rxnfp for more details.
Examples
>>> import os
>>> from deepchem.feat.smiles_tokenizer import SmilesTokenizer
>>> current_dir = os.path.dirname(os.path.realpath(__file__))
>>> vocab_path = os.path.join(current_dir, 'tests/data', 'vocab.txt')
>>> tokenizer = SmilesTokenizer(vocab_path)
>>> print(tokenizer.encode("CC(=O)OC1=CC=CC=C1C(=O)O"))
[12, 16, 16, 17, 22, 19, 18, 19, 16, 20, 22, 16, 16, 22, 16, 16, 22, 16, 20, 16, 17, 22, 19, 18, 19, 13]
References
- 1
Schwaller, Philippe; Probst, Daniel; Vaucher, Alain C.; Nair, Vishnu H; Kreutter, David; Laino, Teodoro; et al. (2019): Mapping the Space of Chemical Reactions using Attention-Based Neural Networks. ChemRxiv. Preprint. https://doi.org/10.26434/chemrxiv.9897365.v3
Note
This class requires huggingface’s transformers and tokenizers libraries to be installed.
- __init__(vocab_file: str = '', **kwargs)[source]¶
Constructs a SmilesTokenizer.
- Parameters
vocab_file (str) – Path to a SMILES character per line vocabulary file. Default vocab file is found in deepchem/feat/tests/data/vocab.txt
- convert_tokens_to_string(tokens: List[str])[source]¶
Converts a sequence of tokens (string) into a single string.
- Parameters
tokens (List[str]) – List of tokens for a given string sequence.
- Returns
out_string – Single string from combined tokens.
- Return type
str
- add_special_tokens_ids_single_sequence(token_ids: List[Optional[int]])[source]¶
Adds special tokens to a sequence for sequence classification tasks.
A BERT sequence has the following format: [CLS] X [SEP]
- Parameters
token_ids (list[int]) – list of tokenized input ids. Can be obtained using the encode or encode_plus methods.
- add_special_tokens_single_sequence(tokens: List[str])[source]¶
Adds special tokens to a sequence for sequence classification tasks. A BERT sequence has the following format: [CLS] X [SEP]
- Parameters
tokens (List[str]) – List of tokens for a given string sequence.
- add_special_tokens_ids_sequence_pair(token_ids_0: List[Optional[int]], token_ids_1: List[Optional[int]]) List[Optional[int]] [source]¶
Adds special tokens to a sequence pair for sequence classification tasks. A BERT sequence pair has the following format: [CLS] A [SEP] B [SEP]
- Parameters
token_ids_0 (List[int]) – List of ids for the first string sequence in the sequence pair (A).
token_ids_1 (List[int]) – List of ids for the second string sequence in the sequence pair (B).
- add_padding_tokens(token_ids: List[Optional[int]], length: int, right: bool = True) List[Optional[int]] [source]¶
Adds padding tokens to return a sequence of the requested length. By default, padding tokens are added to the right of the sequence.
- Parameters
token_ids (list[optional[int]]) – list of tokenized input ids. Can be obtained using the encode or encode_plus methods.
length (int) – Total length of the returned sequence after padding.
right (bool, default True) – If True, padding tokens are added to the right of the sequence; otherwise they are added to the left.
- Returns
The list of token ids padded to the requested length.
- Return type
List[int]
- save_vocabulary(save_directory: str, filename_prefix: Optional[str] = None)[source]¶
Save the tokenizer vocabulary to a file.
- Parameters
save_directory (str) – The directory in which to save the SMILES character-per-line vocabulary file.
- Returns
vocab_file – Paths to the saved files: a tuple containing the path to the SMILES character-per-line vocabulary file.
- Return type
Tuple
BasicSmilesTokenizer¶
The dc.feat.BasicSmilesTokenizer
module uses a regex tokenization pattern to tokenise SMILES strings.
The regex was developed by Schwaller et al. This tokenizer is intended for use on SMILES
in cases where the user does not wish to rely on the transformers API.
- class BasicSmilesTokenizer(regex_pattern: str = '(\\[[^\\]]+]|Br?|Cl?|N|O|S|P|F|I|b|c|n|o|s|p|\\(|\\)|\\.|=|#|-|\\+|\\\\|\\/|:|~|@|\\?|>>?|\\*|\\$|\\%[0-9]{2}|[0-9])')[source]¶
Run basic SMILES tokenization using a regex pattern developed by Schwaller et al. This tokenizer is to be used when a tokenizer that does not require HuggingFace’s transformers library is needed.
Examples
>>> from deepchem.feat.smiles_tokenizer import BasicSmilesTokenizer
>>> tokenizer = BasicSmilesTokenizer()
>>> print(tokenizer.tokenize("CC(=O)OC1=CC=CC=C1C(=O)O"))
['C', 'C', '(', '=', 'O', ')', 'O', 'C', '1', '=', 'C', 'C', '=', 'C', 'C', '=', 'C', '1', 'C', '(', '=', 'O', ')', 'O']
References
- 1
Philippe Schwaller, Teodoro Laino, Théophile Gaudin, Peter Bolgar, Christopher A. Hunter, Costas Bekas, and Alpha A. Lee ACS Central Science 2019 5 (9): Molecular Transformer: A Model for Uncertainty-Calibrated Chemical Reaction Prediction 1572-1583 DOI: 10.1021/acscentsci.9b00576
HuggingFaceFeaturizer¶
- class HuggingFaceFeaturizer(tokenizer: transformers.tokenization_utils_fast.PreTrainedTokenizerFast)[source]¶
Wrapper class that wraps HuggingFace tokenizers as DeepChem featurizers
The HuggingFaceFeaturizer wrapper provides a wrapper around HuggingFace tokenizers, allowing them to be used as DeepChem featurizers. This might be useful in scenarios where a user needs to use a HuggingFace tokenizer when loading a dataset.
Example
>>> from deepchem.feat import HuggingFaceFeaturizer
>>> from transformers import RobertaTokenizerFast
>>> hf_tokenizer = RobertaTokenizerFast.from_pretrained("seyonec/PubChem10M_SMILES_BPE_60k")
>>> featurizer = HuggingFaceFeaturizer(tokenizer=hf_tokenizer)
>>> result = featurizer.featurize(['CC(=O)C'])
- __init__(tokenizer: transformers.tokenization_utils_fast.PreTrainedTokenizerFast)[source]¶
Initializes a tokenizer wrapper
- Parameters
tokenizer (transformers.tokenization_utils_fast.PreTrainedTokenizerFast) – The tokenizer to use for featurization
- featurize(datapoints: Iterable[Any], log_every_n: int = 1000, **kwargs) numpy.ndarray [source]¶
Calculate features for datapoints.
- Parameters
datapoints (Iterable[Any]) – A sequence of objects that you’d like to featurize. Subclasses of Featurizer should implement the _featurize method that featurizes objects in the sequence.
log_every_n (int, default 1000) – Logs featurization progress every log_every_n steps.
- Returns
A numpy array containing a featurized representation of datapoints.
- Return type
np.ndarray
GroverAtomVocabTokenizer¶
- class GroverAtomVocabTokenizer(fname: str)[source]¶
Grover Atom Vocabulary Tokenizer
The Grover Atom vocab tokenizer is used for tokenizing an atom using a vocabulary generated by GroverAtomVocabularyBuilder.
Example
>>> import tempfile
>>> import deepchem as dc
>>> from rdkit import Chem
>>> from deepchem.feat.vocabulary_builders.grover_vocab import GroverAtomVocabularyBuilder
>>> file = tempfile.NamedTemporaryFile()
>>> dataset = dc.data.NumpyDataset(X=[['CC(=O)C', 'CCC']])
>>> vocab = GroverAtomVocabularyBuilder()
>>> vocab.build(dataset)
>>> vocab.save(file.name)  # build and save the vocabulary
>>> atom_tokenizer = GroverAtomVocabTokenizer(file.name)
>>> mol = Chem.MolFromSmiles('CC(=O)C')
>>> atom_tokenizer.featurize([(mol, mol.GetAtomWithIdx(0))])[0]
2
- Parameters
fname (str) – Filename of vocabulary generated by GroverAtomVocabularyBuilder
GroverBondVocabTokenizer¶
- class GroverBondVocabTokenizer(fname: str)[source]¶
Grover Bond Vocabulary Tokenizer
The Grover Bond vocab tokenizer is used for tokenizing a bond using a vocabulary generated by GroverBondVocabularyBuilder.
Example
>>> import tempfile
>>> import deepchem as dc
>>> from rdkit import Chem
>>> from deepchem.feat.vocabulary_builders.grover_vocab import GroverBondVocabularyBuilder
>>> file = tempfile.NamedTemporaryFile()
>>> dataset = dc.data.NumpyDataset(X=[['CC(=O)C', 'CCC']])
>>> vocab = GroverBondVocabularyBuilder()
>>> vocab.build(dataset)
>>> vocab.save(file.name)  # build and save the vocabulary
>>> bond_tokenizer = GroverBondVocabTokenizer(file.name)
>>> mol = Chem.MolFromSmiles('CC(=O)C')
>>> bond_tokenizer.featurize([(mol, mol.GetBondWithIdx(0))])[0]
2
- Parameters
fname (str) – Filename of vocabulary generated by GroverBondVocabularyBuilder
Vocabulary Builders¶
Tokenizers use a vocabulary to tokenize datapoints. To build a vocabulary, an algorithm which generates a vocabulary from a corpus is required. A corpus is usually a collection of molecules, DNA sequences, etc. DeepChem provides the following algorithms to build a vocabulary from a corpus. A vocabulary builder is not a featurizer; it is a utility which helps the tokenizers featurize datapoints.
- class GroverAtomVocabularyBuilder(max_size: Optional[int] = None)[source]¶
Atom Vocabulary Builder for Grover
This module can be used to generate an atom vocabulary from SMILES strings for the GROVER pretraining task. For each atom in a molecule, the vocabulary context is the node-edge-count of the atom, where node is the neighboring atom, edge is the type of bond (single bond or double bond) and count is the number of such node-edge pairs for the atom in its neighborhood. For example, for the molecule ‘CC(=O)C’, the context of the first carbon atom is C-SINGLE1 because its neighbor is a C atom, the type of bond is a SINGLE bond and the count of such bonds is 1. The context of the second carbon atom is C-SINGLE2 and O-DOUBLE1 because it is connected to two carbon atoms by single bonds and one O atom by a double bond. The vocabulary of an atom is then computed as atom-symbol_contexts, where the contexts are sorted in alphabetical order when there are multiple contexts. For example, the vocabulary of the second C is C_C-SINGLE2_O-DOUBLE1.
The algorithm enumerates the vocabulary of all atoms in the dataset and makes a vocabulary-to-index mapping by sorting the vocabulary by frequency and then alphabetically. The max_size parameter can be used to set the size of the vocabulary. When this parameter is set, the algorithm stops adding new words to the index once the vocabulary size reaches max_size.
- Parameters
max_size (int (optional)) – Maximum size of vocabulary
Example
>>> import tempfile
>>> import deepchem as dc
>>> from rdkit import Chem
>>> file = tempfile.NamedTemporaryFile()
>>> dataset = dc.data.NumpyDataset(X=[['CCC'], ['CC(=O)C']])
>>> vocab = GroverAtomVocabularyBuilder()
>>> vocab.build(dataset)
>>> vocab.stoi
{'<pad>': 0, '<other>': 1, 'C_C-SINGLE1': 2, 'C_C-SINGLE2': 3, 'C_C-SINGLE2_O-DOUBLE1': 4, 'O_C-DOUBLE1': 5}
>>> vocab.save(file.name)
>>> loaded_vocab = GroverAtomVocabularyBuilder.load(file.name)
>>> mol = Chem.MolFromSmiles('CC(=O)C')
>>> loaded_vocab.encode(mol, mol.GetAtomWithIdx(1))
4
- build(dataset: deepchem.data.datasets.Dataset) None [source]¶
Builds vocabulary
- Parameters
dataset (dc.data.Dataset) – A dataset object with SMILES strings in its X attribute.
- save(fname: str) None [source]¶
Saves a vocabulary in json format
- Parameters
fname (str) – Filename to save vocabulary
- classmethod load(fname: str) deepchem.feat.vocabulary_builders.grover_vocab.GroverAtomVocabularyBuilder [source]¶
Loads vocabulary from the specified json file
- Parameters
fname (str) – JSON file containing vocabulary
- Returns
vocab – A grover atom vocabulary builder which can be used for encoding
- Return type
GroverAtomVocabularyBuilder
- static atom_to_vocab(mol: Any, atom: Any) str [source]¶
Convert atom to vocabulary.
- Parameters
mol (RDKitMol) – a molecule object
atom (RDKitAtom) – the target atom.
- Returns
vocab – The generated atom vocabulary with its contexts.
- Return type
str
Example
>>> from rdkit import Chem
>>> mol = Chem.MolFromSmiles('[C@@H](C)C(=O)O')
>>> GroverAtomVocabularyBuilder.atom_to_vocab(mol, mol.GetAtomWithIdx(0))
'C_C-SINGLE2'
>>> GroverAtomVocabularyBuilder.atom_to_vocab(mol, mol.GetAtomWithIdx(3))
'O_C-DOUBLE1'
Sequence Featurizers¶
PFMFeaturizer¶
The dc.feat.PFMFeaturizer
module implements a featurizer for position frequency matrices.
This takes in a list of multiple sequence alignments and returns a list of position frequency matrices.
- class PFMFeaturizer(charset: List[str] = ['A', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'K', 'L', 'M', 'N', 'P', 'Q', 'R', 'S', 'T', 'V', 'W', 'Y', 'X', 'Z', 'B', 'U', 'O'], max_length: Optional[int] = 100)[source]¶
Encodes a list of position frequency matrices for a given list of multiple sequence alignments.
The default character set is 25 amino acids. If you want to use a different character set, such as nucleotides, simply pass in a list of character strings in the featurizer constructor.
The max_length parameter is the maximum length of the sequences to be featurized. If you want to featurize longer sequences, modify the max_length parameter in the featurizer constructor.
The final row in the position frequency matrix counts characters which are not included in the charset (the “unknown” set).
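For example, to featurize nucleotide alignments you could pass a custom charset (a sketch; the four-letter DNA charset here is an assumption for illustration):

```python
from deepchem.feat.sequence_featurizers import PFMFeaturizer

nucleotide_charset = ['A', 'C', 'G', 'T']
featurizer = PFMFeaturizer(charset=nucleotide_charset, max_length=50)
# One multiple sequence alignment containing two aligned sequences.
pfm = featurizer.featurize([['ACGT', 'ACGA']])
# pfm.shape is expected to be (1, 5, 50): 4 charset rows plus the unknown row.
```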
Examples
>>> from deepchem.feat.sequence_featurizers import PFMFeaturizer
>>> from deepchem.data import NumpyDataset
>>> msa = NumpyDataset(X=[['ABC','BCD'],['AAA','AAB']], ids=[['seq01','seq02'],['seq11','seq12']])
>>> seqs = msa.X
>>> featurizer = PFMFeaturizer()
>>> pfm = featurizer.featurize(seqs)
>>> pfm.shape
(2, 26, 100)
- __init__(charset: List[str] = ['A', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'K', 'L', 'M', 'N', 'P', 'Q', 'R', 'S', 'T', 'V', 'W', 'Y', 'X', 'Z', 'B', 'U', 'O'], max_length: Optional[int] = 100)[source]¶
Initialize featurizer.
- Parameters
charset (List[str] (default CHARSET)) – A list of strings, where each string is length 1 and unique.
max_length (int, optional (default 100)) – Maximum length of sequences to be featurized.
Other Featurizers¶
BertFeaturizer¶
- class BertFeaturizer(tokenizer: transformers.models.bert.tokenization_bert_fast.BertTokenizerFast)[source]¶
Bert Featurizer.
The Bert Featurizer is a wrapper class for HuggingFace’s BertTokenizerFast. This class allows users to use the BertTokenizer API while remaining inside the DeepChem ecosystem.
Examples
>>> from deepchem.feat import BertFeaturizer
>>> from transformers import BertTokenizerFast
>>> tokenizer = BertTokenizerFast.from_pretrained("Rostlab/prot_bert", do_lower_case=False)
>>> featurizer = BertFeaturizer(tokenizer)
>>> feats = featurizer.featurize('D L I P [MASK] L V T')
Notes
Examples are based on RostLab’s ProtBert documentation.
- featurize(datapoints: Iterable[Any], log_every_n: int = 1000, **kwargs) numpy.ndarray [source]¶
Calculate features for datapoints.
- Parameters
datapoints (Iterable[Any]) – A sequence of objects that you’d like to featurize. Subclasses of Featurizer should implement the _featurize method that featurizes objects in the sequence.
log_every_n (int, default 1000) – Logs featurization progress every log_every_n steps.
- Returns
A numpy array containing a featurized representation of datapoints.
- Return type
np.ndarray
RobertaFeaturizer¶
- class RobertaFeaturizer(**kwargs)[source]¶
Roberta Featurizer.
The Roberta Featurizer is a wrapper class around the Roberta Tokenizer, which is used by HuggingFace’s transformers library for tokenizing large corpora for Roberta models. Please see the details in [1]_.
Please see https://github.com/huggingface/transformers and https://github.com/seyonechithrananda/bert-loves-chemistry for more details.
Examples
>>> from deepchem.feat import RobertaFeaturizer
>>> smiles = ["Cn1c(=O)c2c(ncn2C)n(C)c1=O", "CC(=O)N1CN(C(C)=O)C(O)C1O"]
>>> featurizer = RobertaFeaturizer.from_pretrained("seyonec/SMILES_tokenized_PubChem_shard00_160k")
>>> out = featurizer.featurize(smiles, add_special_tokens=True, truncation=True)
References
- 1
Chithrananda, Seyone, Grand, Gabriel, and Ramsundar, Bharath (2020): “Chemberta: Large-scale self-supervised pretraining for molecular property prediction.” arXiv. preprint. arXiv:2010.09885.
Note
This class requires transformers to be installed. RobertaFeaturizer uses dual inheritance with RobertaTokenizerFast in Huggingface for rapid tokenization, as well as DeepChem’s MolecularFeaturizer class.
- add_special_tokens(special_tokens_dict: Dict[str, Union[str, tokenizers.AddedToken]], replace_additional_special_tokens=True) int [source]¶
Add a dictionary of special tokens (eos, pad, cls, etc.) to the encoder and link them to class attributes. If special tokens are NOT in the vocabulary, they are added to it (indexed starting from the last index of the current vocabulary).
Note: when adding new tokens to the vocabulary, you should make sure to also resize the token embedding matrix of the model so that its embedding matrix matches the tokenizer.
In order to do that, please use the [~PreTrainedModel.resize_token_embeddings] method.
Using add_special_tokens will ensure your special tokens can be used in several ways:
Special tokens are carefully handled by the tokenizer (they are never split).
You can easily refer to special tokens using tokenizer class attributes like tokenizer.cls_token. This makes it easy to develop model-agnostic training and fine-tuning scripts.
When possible, special tokens are already registered for provided pretrained models (for instance, [BertTokenizer]’s cls_token is already registered to be ‘[CLS]’ and XLM’s is registered to be ‘</s>’).
- Parameters
special_tokens_dict (dictionary str to str or tokenizers.AddedToken) –
Keys should be in the list of predefined special attributes: [bos_token, eos_token, unk_token, sep_token, pad_token, cls_token, mask_token, additional_special_tokens].
Tokens are only added if they are not already in the vocabulary (tested by checking if the tokenizer assigns the index of the unk_token to them).
replace_additional_special_tokens (bool, optional, defaults to True) – If True, the existing list of additional special tokens will be replaced by the one specified in special_tokens_dict. Otherwise, self._additional_special_tokens is just updated. In the former case, the tokens will NOT be removed from the tokenizer’s full vocabulary; they are only being flagged as non-special tokens.
- Returns
Number of tokens added to the vocabulary.
- Return type
int
Examples:
```python
# Let's see how to add a new classification token to GPT-2
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")

special_tokens_dict = {"cls_token": "<CLS>"}

num_added_toks = tokenizer.add_special_tokens(special_tokens_dict)
print("We have added", num_added_toks, "tokens")
# Notice: resize_token_embeddings expects to receive the full size of the new
# vocabulary, i.e., the length of the tokenizer.
model.resize_token_embeddings(len(tokenizer))
```
- add_tokens(new_tokens: Union[str, tokenizers.AddedToken, List[Union[str, tokenizers.AddedToken]]], special_tokens: bool = False) int [source]¶
Add a list of new tokens to the tokenizer class. If the new tokens are not in the vocabulary, they are added to it with indices starting from the length of the current vocabulary and will be isolated before the tokenization algorithm is applied. Added tokens and tokens from the vocabulary of the tokenization algorithm are therefore not treated in the same way.
Note: when adding new tokens to the vocabulary, you should make sure to also resize the token embedding matrix of the model so that its embedding matrix matches the tokenizer.
In order to do that, please use the [~PreTrainedModel.resize_token_embeddings] method.
- Parameters
new_tokens (str, tokenizers.AddedToken or a list of str or tokenizers.AddedToken) – Tokens are only added if they are not already in the vocabulary. tokenizers.AddedToken wraps a string token to let you personalize its behavior: whether this token should only match against a single word, whether this token should strip all potential whitespaces on the left side, whether this token should strip all potential whitespaces on the right side, etc.
special_tokens (bool, optional, defaults to False) –
Can be used to specify if the token is a special token. This mostly changes the normalization behavior (special tokens like CLS or [MASK] are usually not lower-cased, for instance).
See details for tokenizers.AddedToken in HuggingFace tokenizers library.
- Returns
Number of tokens added to the vocabulary.
- Return type
int
Examples:
```python
# Let's see how to increase the vocabulary of the Bert model and tokenizer
from transformers import BertTokenizerFast, BertModel

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

num_added_toks = tokenizer.add_tokens(["new_tok1", "my_new-tok2"])
print("We have added", num_added_toks, "tokens")
# Notice: resize_token_embeddings expects to receive the full size of the new
# vocabulary, i.e., the length of the tokenizer.
model.resize_token_embeddings(len(tokenizer))
```
- property additional_special_tokens: List[str][source]¶
All the additional special tokens you may want to use. Log an error if used while not having been set.
- Type
List[str]
- property additional_special_tokens_ids: List[int][source]¶
Ids of all the additional special tokens in the vocabulary. Log an error if used while not having been set.
- Type
List[int]
- property all_special_ids: List[int][source]¶
List the ids of the special tokens (‘<unk>’, ‘<cls>’, etc.) mapped to class attributes.
- Type
List[int]
- property all_special_tokens: List[str][source]¶
All the special tokens (‘<unk>’, ‘<cls>’, etc.) mapped to class attributes.
Convert tokens of tokenizers.AddedToken type to string.
- Type
List[str]
- property all_special_tokens_extended: List[Union[str, tokenizers.AddedToken]][source]¶
All the special tokens (‘<unk>’, ‘<cls>’, etc.) mapped to class attributes.
Don’t convert tokens of tokenizers.AddedToken type to string so they can be used to control more finely how special tokens are tokenized.
- Type
List[Union[str, tokenizers.AddedToken]]
- as_target_tokenizer()[source]¶
Temporarily sets the tokenizer for encoding the targets. Useful for tokenizer associated to sequence-to-sequence models that need a slightly different processing for the labels.
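For illustration, a minimal sketch of how this context manager is typically used when preparing labels; the t5-small checkpoint and the example texts are assumptions, not part of this reference:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
inputs = tokenizer("translate English to French: Hello", return_tensors="pt")
with tokenizer.as_target_tokenizer():
    # Inside this block the tokenizer processes text as decoder targets.
    labels = tokenizer("Bonjour", return_tensors="pt")
inputs["labels"] = labels["input_ids"]
```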
- property backend_tokenizer: tokenizers.Tokenizer[source]¶
The Rust tokenizer used as a backend.
- Type
tokenizers.Tokenizer
- batch_decode(sequences: Union[List[int], List[List[int]], np.ndarray, torch.Tensor, tf.Tensor], skip_special_tokens: bool = False, clean_up_tokenization_spaces: bool = None, **kwargs) List[str] [source]¶
Convert a list of lists of token ids into a list of strings by calling decode.
- Parameters
sequences (Union[List[int], List[List[int]], np.ndarray, torch.Tensor, tf.Tensor]) – List of tokenized input ids. Can be obtained using the __call__ method.
skip_special_tokens (bool, optional, defaults to False) – Whether or not to remove special tokens in the decoding.
clean_up_tokenization_spaces (bool, optional) – Whether or not to clean up the tokenization spaces. If None, will default to self.clean_up_tokenization_spaces.
kwargs (additional keyword arguments, optional) – Will be passed to the underlying model specific decode method.
- Returns
The list of decoded sentences.
- Return type
List[str]
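As a quick illustration (a sketch, assuming the roberta-base checkpoint), batch_decode inverts a batched encoding:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
encoded = tokenizer(["Hello world!", "Tokenizers are neat."], padding=True)
decoded = tokenizer.batch_decode(encoded["input_ids"], skip_special_tokens=True)
# decoded should round-trip to the original two strings.
```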
- batch_encode_plus(batch_text_or_text_pairs: Union[List[str], List[Tuple[str, str]], List[List[str]], List[Tuple[List[str], List[str]]], List[List[int]], List[Tuple[List[int], List[int]]]], add_special_tokens: bool = True, padding: Union[bool, str, transformers.utils.generic.PaddingStrategy] = False, truncation: Optional[Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy]] = None, max_length: Optional[int] = None, stride: int = 0, is_split_into_words: bool = False, pad_to_multiple_of: Optional[int] = None, return_tensors: Optional[Union[str, transformers.utils.generic.TensorType]] = None, return_token_type_ids: Optional[bool] = None, return_attention_mask: Optional[bool] = None, return_overflowing_tokens: bool = False, return_special_tokens_mask: bool = False, return_offsets_mapping: bool = False, return_length: bool = False, verbose: bool = True, **kwargs) transformers.tokenization_utils_base.BatchEncoding [source]¶
Tokenize and prepare for the model a list of sequences or a list of pairs of sequences.
Warning
This method is deprecated; __call__ should be used instead.
- Parameters
batch_text_or_text_pairs (List[str], List[Tuple[str, str]], List[List[str]], List[Tuple[List[str], List[str]]], and for not-fast tokenizers, also List[List[int]], List[Tuple[List[int], List[int]]]) – Batch of sequences or pair of sequences to be encoded. This can be a list of string/string-sequences/int-sequences or a list of pair of string/string-sequences/int-sequence (see details in encode_plus).
add_special_tokens (bool, optional, defaults to True) – Whether or not to encode the sequences with the special tokens relative to their model.
padding (bool, str or [~utils.PaddingStrategy], optional, defaults to False) –
Activates and controls padding. Accepts the following values:
True or ‘longest’: Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).
’max_length’: Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.
False or ‘do_not_pad’ (default): No padding (i.e., can output a batch with sequences of different lengths).
truncation (bool, str or [~tokenization_utils_base.TruncationStrategy], optional, defaults to False) –
Activates and controls truncation. Accepts the following values:
True or ‘longest_first’: Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided.
’only_first’: Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
’only_second’: Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
False or ‘do_not_truncate’ (default): No truncation (i.e., can output batch with sequence lengths greater than the model maximum admissible input size).
max_length (int, optional) –
Controls the maximum length to use by one of the truncation/padding parameters.
If left unset or set to None, this will use the predefined model maximum length if a maximum length is required by one of the truncation/padding parameters. If the model has no specific maximum input length (like XLNet) truncation/padding to a maximum length will be deactivated.
stride (int, optional, defaults to 0) – If set to a number along with max_length, the overflowing tokens returned when return_overflowing_tokens=True will contain some tokens from the end of the truncated sequence returned to provide some overlap between truncated and overflowing sequences. The value of this argument defines the number of overlapping tokens.
is_split_into_words (bool, optional, defaults to False) – Whether or not the input is already pre-tokenized (e.g., split into words). If set to True, the tokenizer assumes the input is already split into words (for instance, by splitting it on whitespace) which it will tokenize. This is useful for NER or token classification.
pad_to_multiple_of (int, optional) – If set will pad the sequence to a multiple of the provided value. Requires padding to be activated. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
return_tensors (str or [~utils.TensorType], optional) –
If set, will return tensors instead of list of python integers. Acceptable values are:
’tf’: Return TensorFlow tf.constant objects.
’pt’: Return PyTorch torch.Tensor objects.
’np’: Return Numpy np.ndarray objects.
return_token_type_ids (bool, optional) –
Whether to return token type IDs. If left to the default, will return the token type IDs according to the specific tokenizer’s default, defined by the return_outputs attribute.
[What are token type IDs?](../glossary#token-type-ids)
return_attention_mask (bool, optional) –
Whether to return the attention mask. If left to the default, will return the attention mask according to the specific tokenizer’s default, defined by the return_outputs attribute.
[What are attention masks?](../glossary#attention-mask)
return_overflowing_tokens (bool, optional, defaults to False) – Whether or not to return overflowing token sequences. If a pair of sequences of input ids (or a batch of pairs) is provided with truncation_strategy = longest_first or True, an error is raised instead of returning overflowing tokens.
return_special_tokens_mask (bool, optional, defaults to False) – Whether or not to return special tokens mask information.
return_offsets_mapping (bool, optional, defaults to False) –
Whether or not to return (char_start, char_end) for each token.
This is only available on fast tokenizers inheriting from [PreTrainedTokenizerFast], if using Python’s tokenizer, this method will raise NotImplementedError.
return_length (bool, optional, defaults to False) – Whether or not to return the lengths of the encoded inputs.
verbose (bool, optional, defaults to True) – Whether or not to print more information and warnings.
**kwargs – Passed to the self.tokenize() method.
- Returns
A [BatchEncoding] with the following fields:
input_ids – List of token ids to be fed to a model.
[What are input IDs?](../glossary#input-ids)
token_type_ids – List of token type ids to be fed to a model (when return_token_type_ids=True or if “token_type_ids” is in self.model_input_names).
[What are token type IDs?](../glossary#token-type-ids)
attention_mask – List of indices specifying which tokens should be attended to by the model (when return_attention_mask=True or if “attention_mask” is in self.model_input_names).
[What are attention masks?](../glossary#attention-mask)
overflowing_tokens – List of overflowing tokens sequences (when a max_length is specified and return_overflowing_tokens=True).
num_truncated_tokens – Number of tokens truncated (when a max_length is specified and return_overflowing_tokens=True).
special_tokens_mask – List of 0s and 1s, with 1 specifying added special tokens and 0 specifying regular sequence tokens (when add_special_tokens=True and return_special_tokens_mask=True).
length – The length of the inputs (when return_length=True)
- Return type
[BatchEncoding]
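Because this method is deprecated, here is a sketch of the equivalent call through __call__ (the checkpoint name is an assumption for illustration):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
# Calling the tokenizer directly on a list of texts replaces batch_encode_plus.
batch = tokenizer(
    ["first sequence", "second sequence"],
    padding=True,
    truncation=True,
    max_length=32,
)
print(batch["input_ids"])
```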
- property bos_token: str[source]¶
Beginning of sentence token. Log an error if used while not having been set.
- Type
str
- property bos_token_id: Optional[int][source]¶
Id of the beginning of sentence token in the vocabulary. Returns None if the token has not been set.
- Type
Optional[int]
- build_inputs_with_special_tokens(token_ids_0, token_ids_1=None)[source]¶
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens.
This implementation does not add special tokens and this method should be overridden in a subclass.
- Parameters
token_ids_0 (List[int]) – The first tokenized sequence.
token_ids_1 (List[int], optional) – The second tokenized sequence.
- Returns
The model input with special tokens.
- Return type
List[int]
- static clean_up_tokenization(out_string: str) str [source]¶
Clean up a list of simple English tokenization artifacts like spaces before punctuation and abbreviated forms.
- Parameters
out_string (str) – The text to clean up.
- Returns
The cleaned-up string.
- Return type
str
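A small sketch of the kind of clean-up performed; the exact output shown is an expectation, not a guarantee from this reference:
```python
from transformers import PreTrainedTokenizerBase

# Removes spaces before punctuation and around abbreviated forms.
cleaned = PreTrainedTokenizerBase.clean_up_tokenization("Hello , world ! It ' s fine .")
print(cleaned)  # expected: "Hello, world! It's fine."
```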
- property cls_token: str[source]¶
Classification token, to extract a summary of an input sequence leveraging self-attention along the full depth of the model. Log an error if used while not having been set.
- Type
str
- property cls_token_id: Optional[int][source]¶
Id of the classification token in the vocabulary, to extract a summary of an input sequence leveraging self-attention along the full depth of the model.
Returns None if the token has not been set.
- Type
Optional[int]
- convert_ids_to_tokens(ids: Union[int, List[int]], skip_special_tokens: bool = False) Union[str, List[str]] [source]¶
Converts a single index or a sequence of indices to a token or a sequence of tokens, using the vocabulary and added tokens.
- Parameters
ids (int or List[int]) – The token id (or token ids) to convert to tokens.
skip_special_tokens (bool, optional, defaults to False) – Whether or not to remove special tokens in the decoding.
- Returns
The decoded token(s).
- Return type
str or List[str]
- convert_tokens_to_ids(tokens: Union[str, List[str]]) Union[int, List[int]] [source]¶
Converts a token string (or a sequence of tokens) to a single integer id (or a sequence of ids), using the vocabulary.
- Parameters
tokens (str or List[str]) – One or several token(s) to convert to token id(s).
- Returns
The token id or list of token ids.
- Return type
int or List[int]
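Together with convert_ids_to_tokens above, this gives a simple round-trip; a sketch assuming the roberta-base checkpoint:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
tokens = tokenizer.tokenize("tokenization")
ids = tokenizer.convert_tokens_to_ids(tokens)
assert tokenizer.convert_ids_to_tokens(ids) == tokens
```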
- convert_tokens_to_string(tokens: List[str]) str [source]¶
Converts a sequence of tokens to a single string. The simplest way to do it is " ".join(tokens), but we often want to remove sub-word tokenization artifacts at the same time.
- Parameters
tokens (List[str]) – The tokens to join into a string.
- Returns
The joined tokens.
- Return type
str
- create_token_type_ids_from_sequences(token_ids_0: List[int], token_ids_1: Optional[List[int]] = None) List[int] [source]¶
Create a mask from the two sequences passed to be used in a sequence-pair classification task. RoBERTa does not make use of token type ids, therefore a list of zeros is returned.
- Parameters
token_ids_0 (List[int]) – List of IDs.
token_ids_1 (List[int], optional) – Optional second list of IDs for sequence pairs.
- Returns
List of zeros.
- Return type
List[int]
- decode(token_ids: Union[int, List[int], np.ndarray, torch.Tensor, tf.Tensor], skip_special_tokens: bool = False, clean_up_tokenization_spaces: bool = None, **kwargs) str [source]¶
Converts a sequence of ids to a string, using the tokenizer and vocabulary, with options to remove special tokens and clean up tokenization spaces.
Similar to doing self.convert_tokens_to_string(self.convert_ids_to_tokens(token_ids)).
- Parameters
token_ids (Union[int, List[int], np.ndarray, torch.Tensor, tf.Tensor]) – List of tokenized input ids. Can be obtained using the __call__ method.
skip_special_tokens (bool, optional, defaults to False) – Whether or not to remove special tokens in the decoding.
clean_up_tokenization_spaces (bool, optional) – Whether or not to clean up the tokenization spaces. If None, will default to self.clean_up_tokenization_spaces.
kwargs (additional keyword arguments, optional) – Will be passed to the underlying model specific decode method.
- Returns
The decoded sentence.
- Return type
str
- property decoder: tokenizers.decoders.Decoder[source]¶
The Rust decoder for this tokenizer.
- Type
tokenizers.decoders.Decoder
- encode(text: Union[str, List[str], List[int]], text_pair: Optional[Union[str, List[str], List[int]]] = None, add_special_tokens: bool = True, padding: Union[bool, str, transformers.utils.generic.PaddingStrategy] = False, truncation: Optional[Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy]] = None, max_length: Optional[int] = None, stride: int = 0, return_tensors: Optional[Union[str, transformers.utils.generic.TensorType]] = None, **kwargs) List[int] [source]¶
Converts a string to a sequence of ids (integers), using the tokenizer and vocabulary.
Same as doing self.convert_tokens_to_ids(self.tokenize(text)).
- Parameters
text (str, List[str] or List[int]) – The first sequence to be encoded. This can be a string, a list of strings (tokenized string using the tokenize method) or a list of integers (tokenized string ids using the convert_tokens_to_ids method).
text_pair (str, List[str] or List[int], optional) – Optional second sequence to be encoded. This can be a string, a list of strings (tokenized string using the tokenize method) or a list of integers (tokenized string ids using the convert_tokens_to_ids method).
add_special_tokens (bool, optional, defaults to True) – Whether or not to encode the sequences with the special tokens relative to their model.
padding (bool, str or [~utils.PaddingStrategy], optional, defaults to False) –
Activates and controls padding. Accepts the following values:
True or ‘longest’: Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).
’max_length’: Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.
False or ‘do_not_pad’ (default): No padding (i.e., can output a batch with sequences of different lengths).
truncation (bool, str or [~tokenization_utils_base.TruncationStrategy], optional, defaults to False) –
Activates and controls truncation. Accepts the following values:
True or ‘longest_first’: Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided.
’only_first’: Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
’only_second’: Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
False or ‘do_not_truncate’ (default): No truncation (i.e., can output batch with sequence lengths greater than the model maximum admissible input size).
max_length (int, optional) –
Controls the maximum length to use by one of the truncation/padding parameters.
If left unset or set to None, this will use the predefined model maximum length if a maximum length is required by one of the truncation/padding parameters. If the model has no specific maximum input length (like XLNet) truncation/padding to a maximum length will be deactivated.
stride (int, optional, defaults to 0) – If set to a number along with max_length, the overflowing tokens returned when return_overflowing_tokens=True will contain some tokens from the end of the truncated sequence returned to provide some overlap between truncated and overflowing sequences. The value of this argument defines the number of overlapping tokens.
is_split_into_words (bool, optional, defaults to False) – Whether or not the input is already pre-tokenized (e.g., split into words). If set to True, the tokenizer assumes the input is already split into words (for instance, by splitting it on whitespace) which it will tokenize. This is useful for NER or token classification.
pad_to_multiple_of (int, optional) – If set will pad the sequence to a multiple of the provided value. Requires padding to be activated. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
return_tensors (str or [~utils.TensorType], optional) –
If set, will return tensors instead of list of python integers. Acceptable values are:
’tf’: Return TensorFlow tf.constant objects.
’pt’: Return PyTorch torch.Tensor objects.
’np’: Return Numpy np.ndarray objects.
**kwargs – Passed along to the .tokenize() method.
- Returns
The tokenized ids of the text.
- Return type
List[int], torch.Tensor, tf.Tensor or np.ndarray
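A minimal encode/decode round-trip, sketched with an assumed checkpoint:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
ids = tokenizer.encode("Hello world", add_special_tokens=True)
text = tokenizer.decode(ids, skip_special_tokens=True)
# text should round-trip to "Hello world".
```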
- encode_plus(text: Union[str, List[str], List[int]], text_pair: Optional[Union[str, List[str], List[int]]] = None, add_special_tokens: bool = True, padding: Union[bool, str, transformers.utils.generic.PaddingStrategy] = False, truncation: Optional[Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy]] = None, max_length: Optional[int] = None, stride: int = 0, is_split_into_words: bool = False, pad_to_multiple_of: Optional[int] = None, return_tensors: Optional[Union[str, transformers.utils.generic.TensorType]] = None, return_token_type_ids: Optional[bool] = None, return_attention_mask: Optional[bool] = None, return_overflowing_tokens: bool = False, return_special_tokens_mask: bool = False, return_offsets_mapping: bool = False, return_length: bool = False, verbose: bool = True, **kwargs) transformers.tokenization_utils_base.BatchEncoding [source]¶
Tokenize and prepare for the model a sequence or a pair of sequences.
Warning
This method is deprecated; __call__ should be used instead.
- Parameters
text (str, List[str] or List[int] (the latter only for not-fast tokenizers)) – The first sequence to be encoded. This can be a string, a list of strings (tokenized string using the tokenize method) or a list of integers (tokenized string ids using the convert_tokens_to_ids method).
text_pair (str, List[str] or List[int], optional) – Optional second sequence to be encoded. This can be a string, a list of strings (tokenized string using the tokenize method) or a list of integers (tokenized string ids using the convert_tokens_to_ids method).
add_special_tokens (bool, optional, defaults to True) – Whether or not to encode the sequences with the special tokens relative to their model.
padding (bool, str or [~utils.PaddingStrategy], optional, defaults to False) –
Activates and controls padding. Accepts the following values:
True or ‘longest’: Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).
’max_length’: Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.
False or ‘do_not_pad’ (default): No padding (i.e., can output a batch with sequences of different lengths).
truncation (bool, str or [~tokenization_utils_base.TruncationStrategy], optional, defaults to False) –
Activates and controls truncation. Accepts the following values:
True or ‘longest_first’: Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided.
’only_first’: Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
’only_second’: Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
False or ‘do_not_truncate’ (default): No truncation (i.e., can output batch with sequence lengths greater than the model maximum admissible input size).
max_length (int, optional) –
Controls the maximum length to use by one of the truncation/padding parameters.
If left unset or set to None, this will use the predefined model maximum length if a maximum length is required by one of the truncation/padding parameters. If the model has no specific maximum input length (like XLNet) truncation/padding to a maximum length will be deactivated.
stride (int, optional, defaults to 0) – If set to a number along with max_length, the overflowing tokens returned when return_overflowing_tokens=True will contain some tokens from the end of the truncated sequence returned to provide some overlap between truncated and overflowing sequences. The value of this argument defines the number of overlapping tokens.
is_split_into_words (bool, optional, defaults to False) – Whether or not the input is already pre-tokenized (e.g., split into words). If set to True, the tokenizer assumes the input is already split into words (for instance, by splitting it on whitespace) which it will tokenize. This is useful for NER or token classification.
pad_to_multiple_of (int, optional) – If set will pad the sequence to a multiple of the provided value. Requires padding to be activated. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
return_tensors (str or [~utils.TensorType], optional) –
If set, will return tensors instead of list of python integers. Acceptable values are:
’tf’: Return TensorFlow tf.constant objects.
’pt’: Return PyTorch torch.Tensor objects.
’np’: Return Numpy np.ndarray objects.
return_token_type_ids (bool, optional) –
Whether to return token type IDs. If left to the default, will return the token type IDs according to the specific tokenizer’s default, defined by the return_outputs attribute.
[What are token type IDs?](../glossary#token-type-ids)
return_attention_mask (bool, optional) –
Whether to return the attention mask. If left to the default, will return the attention mask according to the specific tokenizer’s default, defined by the return_outputs attribute.
[What are attention masks?](../glossary#attention-mask)
return_overflowing_tokens (bool, optional, defaults to False) – Whether or not to return overflowing token sequences. If a pair of sequences of input ids (or a batch of pairs) is provided with truncation_strategy = longest_first or True, an error is raised instead of returning overflowing tokens.
return_special_tokens_mask (bool, optional, defaults to False) – Whether or not to return special tokens mask information.
return_offsets_mapping (bool, optional, defaults to False) –
Whether or not to return (char_start, char_end) for each token.
This is only available on fast tokenizers inheriting from [PreTrainedTokenizerFast], if using Python’s tokenizer, this method will raise NotImplementedError.
return_length (bool, optional, defaults to False) – Whether or not to return the lengths of the encoded inputs.
verbose (bool, optional, defaults to True) – Whether or not to print more information and warnings.
**kwargs – Passed to the self.tokenize() method.
- Returns
A [BatchEncoding] with the following fields:
input_ids – List of token ids to be fed to a model.
[What are input IDs?](../glossary#input-ids)
token_type_ids – List of token type ids to be fed to a model (when return_token_type_ids=True or if “token_type_ids” is in self.model_input_names).
[What are token type IDs?](../glossary#token-type-ids)
attention_mask – List of indices specifying which tokens should be attended to by the model (when return_attention_mask=True or if “attention_mask” is in self.model_input_names).
[What are attention masks?](../glossary#attention-mask)
overflowing_tokens – List of overflowing tokens sequences (when a max_length is specified and return_overflowing_tokens=True).
num_truncated_tokens – Number of tokens truncated (when a max_length is specified and return_overflowing_tokens=True).
special_tokens_mask – List of 0s and 1s, with 1 specifying added special tokens and 0 specifying regular sequence tokens (when add_special_tokens=True and return_special_tokens_mask=True).
length – The length of the inputs (when return_length=True)
- Return type
[BatchEncoding]
- property eos_token: str[source]¶
End of sentence token. Log an error if used while not having been set.
- Type
str
- property eos_token_id: Optional[int][source]¶
Id of the end of sentence token in the vocabulary. Returns None if the token has not been set.
- Type
Optional[int]
- featurize(datapoints: Iterable[Any], log_every_n: int = 1000, **kwargs) numpy.ndarray [source]¶
Calculate features for datapoints.
- Parameters
datapoints (Iterable[Any]) – A sequence of objects that you’d like to featurize. Subclasses of Featurizer should implement the _featurize method that featurizes objects in the sequence.
log_every_n (int, default 1000) – Logs featurization progress every log_every_n steps.
- Returns
A numpy array containing a featurized representation of datapoints.
- Return type
np.ndarray
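For context, a hedged sketch of featurize in the DeepChem setting this page belongs to; the RobertaFeaturizer class name and the seyonec/PubChem10M_SMILES_BPE_450k checkpoint are assumptions drawn from the RoBERTa tokenizer documented here:
```python
import deepchem as dc

smiles = ["CCO", "c1ccccc1"]
featurizer = dc.feat.RobertaFeaturizer.from_pretrained(
    "seyonec/PubChem10M_SMILES_BPE_450k"
)
# Each SMILES string is tokenized; extra kwargs are forwarded to the tokenizer.
feats = featurizer.featurize(smiles, add_special_tokens=True, truncation=True)
```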
- classmethod from_pretrained(pretrained_model_name_or_path: Union[str, os.PathLike], *init_inputs, **kwargs)[source]¶
Instantiate a [~tokenization_utils_base.PreTrainedTokenizerBase] (or a derived class) from a predefined tokenizer.
- Parameters
pretrained_model_name_or_path (str or os.PathLike) –
Can be either:
A string, the model id of a predefined tokenizer hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
A path to a directory containing vocabulary files required by the tokenizer, for instance saved using the [~tokenization_utils_base.PreTrainedTokenizerBase.save_pretrained] method, e.g., ./my_model_directory/.
(Deprecated, not applicable to all derived classes) A path or url to a single saved vocabulary file (if and only if the tokenizer only requires a single vocabulary file like Bert or XLNet), e.g., ./my_model_directory/vocab.txt.
cache_dir (str or os.PathLike, optional) – Path to a directory in which the downloaded predefined tokenizer vocabulary files should be cached if the standard cache should not be used.
force_download (bool, optional, defaults to False) – Whether or not to force the (re-)download of the vocabulary files and override the cached versions if they exist.
resume_download (bool, optional, defaults to False) – Whether or not to delete incompletely received files. Attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) – A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
use_auth_token (str or bool, optional) – The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running huggingface-cli login (stored in ~/.huggingface).
local_files_only (bool, optional, defaults to False) – Whether or not to only rely on local files and not to attempt to download any files.
revision (str, optional, defaults to “main”) – The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
subfolder (str, optional) – In case the relevant files are located inside a subfolder of the model repo on huggingface.co (e.g. for facebook/rag-token-base), specify it here.
inputs (additional positional arguments, optional) – Will be passed along to the Tokenizer __init__ method.
kwargs (additional keyword arguments, optional) – Will be passed to the Tokenizer __init__ method. Can be used to set special tokens like bos_token, eos_token, unk_token, sep_token, pad_token, cls_token, mask_token, additional_special_tokens. See parameters in the __init__ for more details.
Note
Passing use_auth_token=True is required when you want to use a private model.
Examples:
```python
# We can't instantiate directly the base class PreTrainedTokenizerBase, so
# let's show our examples on a derived class: BertTokenizer
from transformers import BertTokenizer

# Download vocabulary from huggingface.co and cache.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# Download vocabulary from huggingface.co (user-uploaded) and cache.
tokenizer = BertTokenizer.from_pretrained("dbmdz/bert-base-german-cased")

# If vocabulary files are in a directory
# (e.g. tokenizer was saved using save_pretrained('./test/saved_model/'))
tokenizer = BertTokenizer.from_pretrained("./test/saved_model/")

# If the tokenizer uses a single vocabulary file, you can point directly to this file
tokenizer = BertTokenizer.from_pretrained("./test/saved_model/my_vocab.txt")

# You can link tokens to special vocabulary when instantiating
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased", unk_token="<unk>")
# You should be sure '<unk>' is in the vocabulary when doing that.
# Otherwise use tokenizer.add_special_tokens({'unk_token': '<unk>'}) instead.
assert tokenizer.unk_token == "<unk>"
```
- get_added_vocab() Dict[str, int] [source]¶
Returns the added tokens in the vocabulary as a dictionary of token to index.
- Returns
The added tokens.
- Return type
Dict[str, int]
- get_special_tokens_mask(token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False) List[int] [source]¶
Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer prepare_for_model or encode_plus methods.
- Parameters
token_ids_0 (List[int]) – List of ids of the first sequence.
token_ids_1 (List[int], optional) – List of ids of the second sequence.
already_has_special_tokens (bool, optional, defaults to False) – Whether or not the token list is already formatted with special tokens for the model.
- Returns
1 for a special token, 0 for a sequence token.
- Return type
A list of integers in the range [0, 1]
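A short sketch (checkpoint assumed) of recovering the mask from an already-encoded sequence:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
ids = tokenizer.encode("hello", add_special_tokens=True)
mask = tokenizer.get_special_tokens_mask(ids, already_has_special_tokens=True)
# 1 marks the added special tokens (e.g. <s> and </s>), 0 marks regular tokens.
```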
- get_vocab() Dict[str, int] [source]¶
Returns the vocabulary as a dictionary of token to index.
tokenizer.get_vocab()[token] is equivalent to tokenizer.convert_tokens_to_ids(token) when token is in the vocab.
- Returns
The vocabulary.
- Return type
Dict[str, int]
- property mask_token: str[source]¶
Mask token, to use when training a model with masked-language modeling. Log an error if used while not having been set.
Roberta tokenizer has a special mask token to be usable in the fill-mask pipeline. The mask token will greedily comprise the space before the <mask>.
- Type
str
- property mask_token_id: Optional[int][source]¶
Id of the mask token in the vocabulary, used when training a model with masked-language modeling. Returns None if the token has not been set.
- Type
Optional[int]
- property max_len_sentences_pair: int[source]¶
The maximum combined length of a pair of sentences that can be fed to the model.
- Type
int
- property max_len_single_sentence: int[source]¶
The maximum length of a sentence that can be fed to the model.
- Type
int
- num_special_tokens_to_add(pair: bool = False) int [source]¶
Returns the number of added tokens when encoding a sequence with special tokens.
Note
This encodes a dummy input and checks the number of added tokens, and is therefore not efficient. Do not put this inside your training loop.
- Parameters
pair (bool, optional, defaults to False) – Whether the number of added tokens should be computed in the case of a sequence pair or a single sequence.
- Returns
Number of special tokens added to sequences.
- Return type
int
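One common use, sketched under an assumed checkpoint, is computing the token budget left for actual text:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
# Tokens available for text once special tokens for a sequence pair are added.
budget = tokenizer.model_max_length - tokenizer.num_special_tokens_to_add(pair=True)
print(budget)
```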
- pad(encoded_inputs: Union[transformers.tokenization_utils_base.BatchEncoding, List[transformers.tokenization_utils_base.BatchEncoding], Dict[str, List[int]], Dict[str, List[List[int]]], List[Dict[str, List[int]]]], padding: Union[bool, str, transformers.utils.generic.PaddingStrategy] = True, max_length: Optional[int] = None, pad_to_multiple_of: Optional[int] = None, return_attention_mask: Optional[bool] = None, return_tensors: Optional[Union[str, transformers.utils.generic.TensorType]] = None, verbose: bool = True) transformers.tokenization_utils_base.BatchEncoding [source]¶
Pad a single encoded input or a batch of encoded inputs up to predefined length or to the max sequence length in the batch.
The padding side (left/right) and the padding token ids are defined at the tokenizer level (with self.padding_side, self.pad_token_id and self.pad_token_type_id).
Please note that with a fast tokenizer, using the __call__ method is faster than using a method to encode the text followed by a call to the pad method to get a padded encoding.
Note
If the encoded_inputs passed are dictionaries of numpy arrays, PyTorch tensors or TensorFlow tensors, the result will use the same type unless you provide a different tensor type with return_tensors. In the case of PyTorch tensors, however, you will lose the specific device of your tensors.
- Parameters
encoded_inputs ([BatchEncoding], list of [BatchEncoding], Dict[str, List[int]], Dict[str, List[List[int]]] or List[Dict[str, List[int]]]) –
Tokenized inputs. Can represent one input ([BatchEncoding] or Dict[str, List[int]]) or a batch of tokenized inputs (list of [BatchEncoding], Dict[str, List[List[int]]] or List[Dict[str, List[int]]]) so you can use this method during preprocessing as well as in a PyTorch Dataloader collate function.
Instead of List[int] you can have tensors (numpy arrays, PyTorch tensors or TensorFlow tensors), see the note above for the return type.
padding (bool, str or [~utils.PaddingStrategy], optional, defaults to True) –
Select a strategy to pad the returned sequences (according to the model’s padding side and padding index) among:
True or ‘longest’: Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).
’max_length’: Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.
False or ‘do_not_pad’ (default): No padding (i.e., can output a batch with sequences of different lengths).
max_length (int, optional) – Maximum length of the returned list and optionally padding length (see above).
pad_to_multiple_of (int, optional) –
If set will pad the sequence to a multiple of the provided value.
This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
return_attention_mask (bool, optional) –
Whether to return the attention mask. If left to the default, will return the attention mask according to the specific tokenizer’s default, defined by the return_outputs attribute.
[What are attention masks?](../glossary#attention-mask)
return_tensors (str or [~utils.TensorType], optional) –
If set, will return tensors instead of list of python integers. Acceptable values are:
’tf’: Return TensorFlow tf.constant objects.
’pt’: Return PyTorch torch.Tensor objects.
’np’: Return Numpy np.ndarray objects.
verbose (bool, optional, defaults to True) – Whether or not to print more information and warnings.
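A sketch of pad used for dynamic padding, e.g. inside a DataLoader collate function (the checkpoint name is an assumption):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
features = [tokenizer(text) for text in ["short", "a much longer sequence"]]
batch = tokenizer.pad(features, padding=True, return_tensors="pt")
print(batch["input_ids"].shape)  # both rows padded to the longer length
```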
- property pad_token: str[source]¶
Padding token. Log an error if used while not having been set.
- Type
str
- property pad_token_id: Optional[int][source]¶
Id of the padding token in the vocabulary. Returns None if the token has not been set.
- Type
Optional[int]
- prepare_for_model(ids: List[int], pair_ids: Optional[List[int]] = None, add_special_tokens: bool = True, padding: Union[bool, str, transformers.utils.generic.PaddingStrategy] = False, truncation: Optional[Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy]] = None, max_length: Optional[int] = None, stride: int = 0, pad_to_multiple_of: Optional[int] = None, return_tensors: Optional[Union[str, transformers.utils.generic.TensorType]] = None, return_token_type_ids: Optional[bool] = None, return_attention_mask: Optional[bool] = None, return_overflowing_tokens: bool = False, return_special_tokens_mask: bool = False, return_offsets_mapping: bool = False, return_length: bool = False, verbose: bool = True, prepend_batch_axis: bool = False, **kwargs) transformers.tokenization_utils_base.BatchEncoding [source]¶
Prepares a sequence of input ids, or a pair of sequences of input ids, so that it can be used by the model. It adds special tokens, truncates sequences if overflowing while taking into account the special tokens, and manages a moving window (with user-defined stride) for overflowing tokens. Note that for pair_ids different from None and truncation_strategy = longest_first or True, it is not possible to return overflowing tokens. Such a combination of arguments will raise an error.
- Parameters
ids (List[int]) – Tokenized input ids of the first sequence. Can be obtained from a string by chaining the tokenize and convert_tokens_to_ids methods.
pair_ids (List[int], optional) – Tokenized input ids of the second sequence. Can be obtained from a string by chaining the tokenize and convert_tokens_to_ids methods.
add_special_tokens (bool, optional, defaults to True) – Whether or not to encode the sequences with the special tokens relative to their model.
padding (bool, str or [~utils.PaddingStrategy], optional, defaults to False) –
Activates and controls padding. Accepts the following values:
True or ‘longest’: Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).
’max_length’: Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.
False or ‘do_not_pad’ (default): No padding (i.e., can output a batch with sequences of different lengths).
truncation (bool, str or [~tokenization_utils_base.TruncationStrategy], optional, defaults to False) –
Activates and controls truncation. Accepts the following values:
True or ‘longest_first’: Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided.
’only_first’: Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
’only_second’: Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
False or ‘do_not_truncate’ (default): No truncation (i.e., can output batch with sequence lengths greater than the model maximum admissible input size).
max_length (int, optional) –
Controls the maximum length to use by one of the truncation/padding parameters.
If left unset or set to None, this will use the predefined model maximum length if a maximum length is required by one of the truncation/padding parameters. If the model has no specific maximum input length (like XLNet) truncation/padding to a maximum length will be deactivated.
stride (int, optional, defaults to 0) – If set to a number along with max_length, the overflowing tokens returned when return_overflowing_tokens=True will contain some tokens from the end of the truncated sequence returned to provide some overlap between truncated and overflowing sequences. The value of this argument defines the number of overlapping tokens.
is_split_into_words (bool, optional, defaults to False) – Whether or not the input is already pre-tokenized (e.g., split into words). If set to True, the tokenizer assumes the input is already split into words (for instance, by splitting it on whitespace) which it will tokenize. This is useful for NER or token classification.
pad_to_multiple_of (int, optional) – If set will pad the sequence to a multiple of the provided value. Requires padding to be activated. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
return_tensors (str or [~utils.TensorType], optional) –
If set, will return tensors instead of list of python integers. Acceptable values are:
’tf’: Return TensorFlow tf.constant objects.
’pt’: Return PyTorch torch.Tensor objects.
’np’: Return Numpy np.ndarray objects.
return_token_type_ids (bool, optional) –
Whether to return token type IDs. If left to the default, will return the token type IDs according to the specific tokenizer’s default, defined by the return_outputs attribute.
[What are token type IDs?](../glossary#token-type-ids)
return_attention_mask (bool, optional) –
Whether to return the attention mask. If left to the default, will return the attention mask according to the specific tokenizer’s default, defined by the return_outputs attribute.
[What are attention masks?](../glossary#attention-mask)
return_overflowing_tokens (bool, optional, defaults to False) – Whether or not to return overflowing token sequences. If a pair of sequences of input ids (or a batch of pairs) is provided with truncation_strategy = longest_first or True, an error is raised instead of returning overflowing tokens.
return_special_tokens_mask (bool, optional, defaults to False) – Whether or not to return special tokens mask information.
return_offsets_mapping (bool, optional, defaults to False) –
Whether or not to return (char_start, char_end) for each token.
This is only available on fast tokenizers inheriting from [PreTrainedTokenizerFast], if using Python’s tokenizer, this method will raise NotImplementedError.
return_length (bool, optional, defaults to False) – Whether or not to return the lengths of the encoded inputs.
verbose (bool, optional, defaults to True) – Whether or not to print more information and warnings.
**kwargs – Passed to the self.tokenize() method.
- Returns
A [BatchEncoding] with the following fields:
input_ids – List of token ids to be fed to a model.
[What are input IDs?](../glossary#input-ids)
token_type_ids – List of token type ids to be fed to a model (when return_token_type_ids=True or if “token_type_ids” is in self.model_input_names).
[What are token type IDs?](../glossary#token-type-ids)
attention_mask – List of indices specifying which tokens should be attended to by the model (when return_attention_mask=True or if “attention_mask” is in self.model_input_names).
[What are attention masks?](../glossary#attention-mask)
overflowing_tokens – List of overflowing tokens sequences (when a max_length is specified and return_overflowing_tokens=True).
num_truncated_tokens – Number of tokens truncated (when a max_length is specified and return_overflowing_tokens=True).
special_tokens_mask – List of 0s and 1s, with 1 specifying added special tokens and 0 specifying regular sequence tokens (when add_special_tokens=True and return_special_tokens_mask=True).
length – The length of the inputs (when return_length=True)
- Return type
[BatchEncoding]
- prepare_seq2seq_batch(src_texts: List[str], tgt_texts: Optional[List[str]] = None, max_length: Optional[int] = None, max_target_length: Optional[int] = None, padding: str = 'longest', return_tensors: Optional[str] = None, truncation: bool = True, **kwargs) transformers.tokenization_utils_base.BatchEncoding [source]¶
Prepare model inputs for translation. For best performance, translate one sentence at a time.
- Parameters
src_texts (List[str]) – List of documents to summarize or source language texts.
tgt_texts (list, optional) – List of summaries or target language texts.
max_length (int, optional) – Controls the maximum length for encoder inputs (documents to summarize or source language texts). If left unset or set to None, this will use the predefined model maximum length if a maximum length is required by one of the truncation/padding parameters. If the model has no specific maximum input length (like XLNet) truncation/padding to a maximum length will be deactivated.
max_target_length (int, optional) – Controls the maximum length of decoder inputs (target language texts or summaries). If left unset or set to None, this will use the max_length value.
padding (bool, str or [~utils.PaddingStrategy], optional, defaults to ‘longest’) –
Activates and controls padding. Accepts the following values:
True or ‘longest’: Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).
’max_length’: Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.
False or ‘do_not_pad’: No padding (i.e., can output a batch with sequences of different lengths).
return_tensors (str or [~utils.TensorType], optional) –
If set, will return tensors instead of list of python integers. Acceptable values are:
’tf’: Return TensorFlow tf.constant objects.
’pt’: Return PyTorch torch.Tensor objects.
’np’: Return Numpy np.ndarray objects.
truncation (bool, str or [~tokenization_utils_base.TruncationStrategy], optional, defaults to True) –
Activates and controls truncation. Accepts the following values:
True or ‘longest_first’: Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided.
’only_first’: Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
’only_second’: Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
False or ‘do_not_truncate’: No truncation (i.e., can output batch with sequence lengths greater than the model maximum admissible input size).
**kwargs – Additional keyword arguments passed along to self.__call__.
- Returns
A [BatchEncoding] with the following fields:
input_ids – List of token ids to be fed to the encoder.
attention_mask – List of indices specifying which tokens should be attended to by the model.
labels – List of token ids for tgt_texts.
The full set of keys [input_ids, attention_mask, labels] will only be returned if tgt_texts is passed. Otherwise, input_ids and attention_mask will be the only keys.
- Return type
[BatchEncoding]
- push_to_hub(repo_id: str, use_temp_dir: Optional[bool] = None, commit_message: Optional[str] = None, private: Optional[bool] = None, use_auth_token: Optional[Union[bool, str]] = None, max_shard_size: Optional[Union[int, str]] = '10GB', create_pr: bool = False, **deprecated_kwargs) str [source]¶
Upload the tokenizer files to the 🤗 Model Hub while synchronizing a local clone of the repo in repo_path_or_name.
- Parameters
repo_id (str) – The name of the repository you want to push your tokenizer to. It should contain your organization name when pushing to a given organization.
use_temp_dir (bool, optional) – Whether or not to use a temporary directory to store the files saved before they are pushed to the Hub. Will default to True if there is no directory named like repo_id, False otherwise.
commit_message (str, optional) – Message to commit while pushing. Will default to “Upload tokenizer”.
private (bool, optional) – Whether or not the repository created should be private.
use_auth_token (bool or str, optional) – The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running huggingface-cli login (stored in ~/.huggingface). Will default to True if repo_url is not specified.
max_shard_size (int or str, optional, defaults to “10GB”) – Only applicable for models. The maximum size for a checkpoint before being sharded. Each checkpoint shard will then be of a size lower than this limit. If expressed as a string, needs to be digits followed by a unit (like “5MB”).
create_pr (bool, optional, defaults to False) – Whether or not to create a PR with the uploaded files or directly commit.
Examples:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

# Push the tokenizer to your namespace with the name "my-finetuned-bert".
tokenizer.push_to_hub("my-finetuned-bert")

# Push the tokenizer to an organization with the name "my-finetuned-bert".
tokenizer.push_to_hub("huggingface/my-finetuned-bert")
```
- classmethod register_for_auto_class(auto_class='AutoTokenizer')[source]¶
Register this class with a given auto class. This should only be used for custom tokenizers as the ones in the library are already mapped with AutoTokenizer.
Warning
This API is experimental and may have some slight breaking changes in the next releases.
- Parameters
auto_class (str or type, optional, defaults to “AutoTokenizer”) – The auto class to register this new tokenizer with.
- sanitize_special_tokens() int [source]¶
Make sure that all the special tokens attributes of the tokenizer (tokenizer.mask_token, tokenizer.cls_token, etc.) are in the vocabulary.
Add the missing ones to the vocabulary if needed.
- Returns
The number of tokens added in the vocabulary during the operation.
- Return type
int
- save_pretrained(save_directory: Union[str, os.PathLike], legacy_format: Optional[bool] = None, filename_prefix: Optional[str] = None, push_to_hub: bool = False, **kwargs) Tuple[str] [source]¶
Save the full tokenizer state.
This method makes sure the full tokenizer can then be re-loaded using the [~tokenization_utils_base.PreTrainedTokenizer.from_pretrained] class method.
Warning
This won’t save modifications you may have applied to the tokenizer after instantiation (for instance, modifying tokenizer.do_lower_case after creation).
- Parameters
save_directory (str or os.PathLike) – The path to a directory where the tokenizer will be saved.
legacy_format (bool, optional) –
Only applicable for a fast tokenizer. If unset (default), will save the tokenizer in the unified JSON format as well as in legacy format if it exists, i.e. with tokenizer-specific vocabulary and a separate added_tokens file.
If False, will only save the tokenizer in the unified JSON format. This format is incompatible with “slow” tokenizers (not powered by the tokenizers library), so the tokenizer will not be able to be loaded in the corresponding “slow” tokenizer.
If True, will save the tokenizer in legacy format. If the “slow” tokenizer doesn’t exist, a ValueError is raised.
filename_prefix (str, optional) – A prefix to add to the names of the files saved by the tokenizer.
push_to_hub (bool, optional, defaults to False) – Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the repository you want to push to with repo_id (will default to the name of save_directory in your namespace).
kwargs – Additional keyword arguments passed along to the [~utils.PushToHubMixin.push_to_hub] method.
- Returns
The files saved.
- Return type
A tuple of str
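A save/reload round-trip, sketched with a temporary directory and an assumed checkpoint:
```python
import tempfile

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
with tempfile.TemporaryDirectory() as tmp_dir:
    tokenizer.save_pretrained(tmp_dir)
    reloaded = AutoTokenizer.from_pretrained(tmp_dir)
assert reloaded.get_vocab() == tokenizer.get_vocab()
```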
- save_vocabulary(save_directory: str, filename_prefix: Optional[str] = None) Tuple[str] [source]¶
Save only the vocabulary of the tokenizer (vocabulary + added tokens).
This method won’t save the configuration and special token mappings of the tokenizer. Use [~PreTrainedTokenizerFast._save_pretrained] to save the whole state of the tokenizer.
- Parameters
save_directory (str) – The directory in which to save the vocabulary.
filename_prefix (str, optional) – An optional prefix to add to the names of the saved files.
- Returns
Paths to the files saved.
- Return type
Tuple[str]
- property sep_token: str[source]¶
Separation token, to separate context and query in an input sequence. Log an error if used while not having been set.
- Type
str
- property sep_token_id: Optional[int][source]¶
Id of the separation token in the vocabulary, to separate context and query in an input sequence. Returns None if the token has not been set.
- Type
Optional[int]
- set_truncation_and_padding(padding_strategy: transformers.utils.generic.PaddingStrategy, truncation_strategy: transformers.tokenization_utils_base.TruncationStrategy, max_length: int, stride: int, pad_to_multiple_of: Optional[int])[source]¶
Define the truncation and the padding strategies for fast tokenizers (provided by HuggingFace tokenizers library) and restore the tokenizer settings afterwards.
The provided tokenizer has no padding / truncation strategy before the managed section. If your tokenizer had a padding / truncation strategy set before, it will be reset to no padding / truncation when exiting the managed section.
- Parameters
padding_strategy ([~utils.PaddingStrategy]) – The kind of padding that will be applied to the input.
truncation_strategy ([~tokenization_utils_base.TruncationStrategy]) – The kind of truncation that will be applied to the input.
max_length (int) – The maximum size of a sequence.
stride (int) – The stride to use when handling overflow.
pad_to_multiple_of (int, optional) – If set will pad the sequence to a multiple of the provided value. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
- slow_tokenizer_class[source]¶
alias of transformers.models.roberta.tokenization_roberta.RobertaTokenizer
- property special_tokens_map: Dict[str, Union[str, List[str]]][source]¶
A dictionary mapping special token class attributes (cls_token, unk_token, etc.) to their values (‘<unk>’, ‘<cls>’, etc.).
Convert potential tokens of tokenizers.AddedToken type to string.
- Type
Dict[str, Union[str, List[str]]]
- property special_tokens_map_extended: Dict[str, Union[str, tokenizers.AddedToken, List[Union[str, tokenizers.AddedToken]]]][source]¶
A dictionary mapping special token class attributes (cls_token, unk_token, etc.) to their values (‘<unk>’, ‘<cls>’, etc.).
Don’t convert tokens of tokenizers.AddedToken type to string so they can be used to control more finely how special tokens are tokenized.
- Type
Dict[str, Union[str, tokenizers.AddedToken, List[Union[str, tokenizers.AddedToken]]]]
- tokenize(text: str, pair: Optional[str] = None, add_special_tokens: bool = False, **kwargs) List[str] [source]¶
Converts a string into a sequence of tokens, replacing unknown tokens with the unk_token.
- Parameters
text (str) – The sequence to be encoded.
pair (str, optional) – A second sequence to be encoded with the first.
add_special_tokens (bool, optional, defaults to False) – Whether or not to add the special tokens associated with the corresponding model.
kwargs (additional keyword arguments, optional) – Will be passed to the underlying model specific encode method. See details in [~PreTrainedTokenizerBase.__call__]
- Returns
The list of tokens.
- Return type
List[str]
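For instance, using the SMILES tokenizer from the RxnFeaturizer example below (a sketch; the exact tokens returned depend on the learned BPE merges):
>>> from transformers import RobertaTokenizerFast
>>> tokenizer = RobertaTokenizerFast.from_pretrained("seyonec/PubChem10M_SMILES_BPE_450k")
>>> # Tokenize a single SMILES string; pass pair= to tokenize two sequences
>>> tokens = tokenizer.tokenize("CCS(=O)(=O)Cl")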
- train_new_from_iterator(text_iterator, vocab_size, length=None, new_special_tokens=None, special_tokens_map=None, **kwargs)[source]¶
Trains a tokenizer on a new corpus with the same defaults (in terms of special tokens or tokenization pipeline) as the current one.
- Parameters
text_iterator (generator of List[str]) – The training corpus. Should be a generator of batches of texts, for instance a list of lists of texts if you have everything in memory.
vocab_size (int) – The size of the vocabulary you want for your tokenizer.
length (int, optional) – The total number of sequences in the iterator. This is used to provide meaningful progress tracking.
new_special_tokens (list of str or AddedToken, optional) – A list of new special tokens to add to the tokenizer you are training.
special_tokens_map (Dict[str, str], optional) – If you want to rename some of the special tokens this tokenizer uses, pass along a mapping old special token name to new special token name in this argument.
kwargs – Additional keyword arguments passed along to the trainer from the 🤗 Tokenizers library.
- Returns
A new tokenizer of the same type as the original one, trained on text_iterator.
- Return type
[PreTrainedTokenizerFast]
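A minimal sketch, assuming a small in-memory corpus of SMILES strings; a real vocabulary would be trained on far more text and with a larger vocab_size:
>>> from transformers import RobertaTokenizerFast
>>> tokenizer = RobertaTokenizerFast.from_pretrained("seyonec/PubChem10M_SMILES_BPE_450k")
>>> # The iterator must yield batches of texts, e.g. lists of strings
>>> corpus = [["CCO", "CCN", "c1ccccc1"], ["CC(=O)O", "CCS(=O)(=O)Cl"]]
>>> new_tokenizer = tokenizer.train_new_from_iterator(
...     (batch for batch in corpus), vocab_size=500)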
- truncate_sequences(ids: List[int], pair_ids: Optional[List[int]] = None, num_tokens_to_remove: int = 0, truncation_strategy: Union[str, transformers.tokenization_utils_base.TruncationStrategy] = 'longest_first', stride: int = 0) Tuple[List[int], List[int], List[int]] [source]¶
Truncates a sequence pair in-place following the strategy.
- Parameters
ids (List[int]) – Tokenized input ids of the first sequence. Can be obtained from a string by chaining the tokenize and convert_tokens_to_ids methods.
pair_ids (List[int], optional) – Tokenized input ids of the second sequence. Can be obtained from a string by chaining the tokenize and convert_tokens_to_ids methods.
num_tokens_to_remove (int, optional, defaults to 0) – Number of tokens to remove using the truncation strategy.
truncation_strategy (str or [~tokenization_utils_base.TruncationStrategy], optional, defaults to ’longest_first’) –
The strategy to follow for truncation. Can be:
’longest_first’: Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided.
’only_first’: Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
’only_second’: Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
’do_not_truncate’: No truncation (i.e., can output batch with sequence lengths greater than the model maximum admissible input size).
stride (int, optional, defaults to 0) – If set to a positive number, the overflowing tokens returned will contain some tokens from the main sequence returned. The value of this argument defines the number of additional tokens.
- Returns
The truncated ids, the truncated pair_ids and the list of overflowing tokens. Note: the longest_first strategy returns an empty list of overflowing tokens if a pair of sequences (or a batch of pairs) is provided.
- Return type
Tuple[List[int], List[int], List[int]]
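A sketch of direct use, chaining tokenize and convert_tokens_to_ids as described above; with the default ’longest_first’ strategy, tokens are removed one at a time from whichever sequence is currently longer:
>>> from transformers import RobertaTokenizerFast
>>> tokenizer = RobertaTokenizerFast.from_pretrained("seyonec/PubChem10M_SMILES_BPE_450k")
>>> ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("CCS(=O)(=O)Cl"))
>>> pair_ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("OCCBr"))
>>> # Remove two tokens in total across the pair
>>> ids, pair_ids, overflowing = tokenizer.truncate_sequences(
...     ids, pair_ids=pair_ids, num_tokens_to_remove=2)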
- property unk_token: str[source]¶
Unknown token. Log an error if used while not having been set.
- Type
str
RxnFeaturizer¶
- class RxnFeaturizer(tokenizer: transformers.models.roberta.tokenization_roberta_fast.RobertaTokenizerFast, sep_reagent: bool)[source]¶
Reaction Featurizer.
RxnFeaturizer is a wrapper class for HuggingFace’s RobertaTokenizerFast that is intended for featurizing chemical reaction datasets. The featurizer computes the source and target strings required for a seq2seq task and applies the RobertaTokenizer to them separately. Additionally, it can also separate or mix the reactants and reagents before tokenizing.
Examples
>>> from deepchem.feat import RxnFeaturizer
>>> from transformers import RobertaTokenizerFast
>>> tokenizer = RobertaTokenizerFast.from_pretrained("seyonec/PubChem10M_SMILES_BPE_450k")
>>> featurizer = RxnFeaturizer(tokenizer, sep_reagent=True)
>>> feats = featurizer.featurize(['CCS(=O)(=O)Cl.OCCBr>CCN(CC)CC.CCOCC>CCS(=O)(=O)OCCBr'])
Notes
The featurize method expects a List of reactions.
- Use the sep_reagent toggle to enable/disable reagent separation.
True - Separate the reactants and reagents
False - Mix the reactants and reagents
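As an illustrative sketch of the toggle, reusing the tokenizer and reaction SMILES from the example above, the two featurizers below differ only in how the reagents (the segment between the two ’>’ characters) are handled before tokenization:
>>> rxn = ['CCS(=O)(=O)Cl.OCCBr>CCN(CC)CC.CCOCC>CCS(=O)(=O)OCCBr']
>>> # sep_reagent=True keeps the reagents separate from the reactants
>>> sep_feats = RxnFeaturizer(tokenizer, sep_reagent=True).featurize(rxn)
>>> # sep_reagent=False folds the reagents into the reactant string
>>> mix_feats = RxnFeaturizer(tokenizer, sep_reagent=False).featurize(rxn)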
- __init__(tokenizer: transformers.models.roberta.tokenization_roberta_fast.RobertaTokenizerFast, sep_reagent: bool)[source]¶
Initialize a RxnFeaturizer object.
- Parameters
tokenizer (RobertaTokenizerFast) – HuggingFace Tokenizer to be used for featurization.
sep_reagent (bool) – Toggle to separate or mix the reactants and reagents.
- featurize(datapoints: Iterable[Any], log_every_n: int = 1000, **kwargs) numpy.ndarray [source]¶
Calculate features for datapoints.
- Parameters
datapoints (Iterable[Any]) – A sequence of objects that you’d like to featurize. Subclasses of Featurizer should implement the _featurize method that featurizes objects in the sequence.
log_every_n (int, default 1000) – Logs featurization progress every log_every_n steps.
- Returns
A numpy array containing a featurized representation of datapoints.
- Return type
np.ndarray
BindingPocketFeaturizer¶
- class BindingPocketFeaturizer[source]¶
Featurizes binding pockets with information about chemical environments.
In many applications, it’s desirable to look at binding pockets on macromolecules which may be good targets for potential ligands or other molecules to interact with. A BindingPocketFeaturizer expects to be given a macromolecule and a list of pockets to featurize on that macromolecule. These pockets should be of the form produced by a dc.dock.BindingPocketFinder, that is, as a list of dc.utils.CoordinateBox objects.
The base featurization in this class is currently very simple and counts the number of residues of each type present in the pocket. It’s likely that you’ll want to overwrite this implementation for more sophisticated downstream use cases. Note that this class’s implementation will only work for proteins and not for other macromolecules.
Note
This class requires mdtraj to be installed.
- featurize(protein_file: str, pockets: List[deepchem.utils.coordinate_box_utils.CoordinateBox]) numpy.ndarray [source]¶
Calculate features for the provided binding pockets.
- Parameters
protein_file (str) – Location of PDB file. Will be loaded by MDTraj
pockets (List[CoordinateBox]) – List of dc.utils.CoordinateBox objects.
- Returns
A numpy array of shape (len(pockets), n_residues)
- Return type
np.ndarray
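A hedged usage sketch: "protein.pdb" is a hypothetical local file, and the pockets come from a dc.dock.BindingPocketFinder such as ConvexHullPocketFinder:
>>> import deepchem as dc
>>> finder = dc.dock.ConvexHullPocketFinder()
>>> pockets = finder.find_pockets("protein.pdb")  # hypothetical PDB file path
>>> featurizer = dc.feat.BindingPocketFeaturizer()
>>> # One row of residue counts per pocket: shape (len(pockets), n_residues)
>>> features = featurizer.featurize("protein.pdb", pockets)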
UserDefinedFeaturizer¶
- class UserDefinedFeaturizer(feature_fields)[source]¶
Directs usage of user-computed featurizations.
- featurize(datapoints: Iterable[Any], log_every_n: int = 1000, **kwargs) numpy.ndarray [source]¶
Calculate features for datapoints.
- Parameters
datapoints (Iterable[Any]) – A sequence of objects that you’d like to featurize. Subclasses of Featurizer should implement the _featurize method that featurizes objects in the sequence.
log_every_n (int, default 1000) – Logs featurization progress every log_every_n steps.
- Returns
A numpy array containing a featurized representation of datapoints.
- Return type
np.ndarray
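A sketch of typical use with a tabular loader. The feature column names ("f1", "f2"), the task name, and the CSV file are hypothetical; the loader reads the pre-computed feature columns directly from the file:
>>> import deepchem as dc
>>> featurizer = dc.feat.UserDefinedFeaturizer(["f1", "f2"])  # hypothetical columns
>>> loader = dc.data.UserCSVLoader(tasks=["task"], featurizer=featurizer)
>>> dataset = loader.create_dataset("features.csv")  # hypothetical file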
DummyFeaturizer¶
- class DummyFeaturizer[source]¶
Class that implements a no-op featurization. This is useful when the raw dataset has to be used without featurizing the examples. The MolNet loader requires a featurizer as input, and such datasets can be used in their original form by passing DummyFeaturizer.
Examples
>>> import deepchem as dc
>>> smi_map = [["N#C[S-].O=C(CBr)c1ccc(C(F)(F)F)cc1>CCO.[K+]", "N#CSCC(=O)c1ccc(C(F)(F)F)cc1"],
...            ["C1COCCN1.FCC(Br)c1cccc(Br)n1>CCN(C(C)C)C(C)C.CN(C)C=O.O", "FCC(c1cccc(Br)n1)N1CCOCC1"]]
>>> featurizer = dc.feat.DummyFeaturizer()
>>> smi_feat = featurizer.featurize(smi_map)
>>> smi_feat
array([['N#C[S-].O=C(CBr)c1ccc(C(F)(F)F)cc1>CCO.[K+]',
        'N#CSCC(=O)c1ccc(C(F)(F)F)cc1'],
       ['C1COCCN1.FCC(Br)c1cccc(Br)n1>CCN(C(C)C)C(C)C.CN(C)C=O.O',
        'FCC(c1cccc(Br)n1)N1CCOCC1']], dtype='<U55')
- featurize(datapoints: Iterable[Any], log_every_n: int = 1000, **kwargs) numpy.ndarray [source]¶
Passes the dataset through unchanged and returns the datapoints.
- Parameters
datapoints (Iterable[Any]) – A sequence of objects that you’d like to featurize.
- Returns
datapoints – A numpy array containing the datapoints, unchanged.
- Return type
np.ndarray
Base Featurizers (for develop)¶
Featurizer¶
The dc.feat.Featurizer
class is the abstract parent class for all featurizers.
- class Featurizer[source]¶
Abstract class for calculating a set of features for a datapoint.
This class is abstract and cannot be invoked directly. You’ll likely only interact with this class if you’re a developer. In that case, you might want to make a child class which implements the _featurize method for calculating features for a single datapoint, if you’d like to make a featurizer for a new datatype.
- featurize(datapoints: Iterable[Any], log_every_n: int = 1000, **kwargs) numpy.ndarray [source]¶
Calculate features for datapoints.
- Parameters
datapoints (Iterable[Any]) – A sequence of objects that you’d like to featurize. Subclasses of Featurizer should implement the _featurize method that featurizes objects in the sequence.
log_every_n (int, default 1000) – Logs featurization progress every log_every_n steps.
- Returns
A numpy array containing a featurized representation of datapoints.
- Return type
np.ndarray
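For example, a toy subclass (hypothetical, for illustration) only needs to implement _featurize for one datapoint; featurize handles iteration, logging, and stacking the results into an array:
>>> import numpy as np
>>> import deepchem as dc
>>> class LengthFeaturizer(dc.feat.Featurizer):
...     """Toy featurizer: featurizes a string by its length."""
...     def _featurize(self, datapoint, **kwargs):
...         return np.array([len(datapoint)])
>>> LengthFeaturizer().featurize(["C", "CCC"])
array([[1],
       [3]])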
MolecularFeaturizer¶
If you’re creating a new featurizer that featurizes molecules,
you will want to inherit from the abstract MolecularFeaturizer
base class.
This featurizer can take RDKit mol objects or SMILES as inputs.
- class MolecularFeaturizer(use_original_atoms_order=False)[source]¶
Abstract class for calculating a set of features for a molecule.
The defining feature of a MolecularFeaturizer is that it uses SMILES strings and RDKit molecule objects to represent small molecules. All other featurizers which are subclasses of this class should plan to process input which comes as SMILES strings or RDKit molecules.
Child classes need to implement the _featurize method for calculating features for a single molecule.
The subclasses of this class require RDKit to be installed.
- __init__(use_original_atoms_order=False)[source]¶
- Parameters
use_original_atoms_order (bool, default False) – Whether to use original atom ordering or canonical ordering (default)
- featurize(datapoints, log_every_n=1000, **kwargs) numpy.ndarray [source]¶
Calculate features for molecules.
- Parameters
datapoints (rdkit.Chem.rdchem.Mol / SMILES string / iterable) – RDKit Mol, or SMILES string or iterable sequence of RDKit mols/SMILES strings.
log_every_n (int, default 1000) – Logging messages reported every log_every_n samples.
- Returns
features – A numpy array containing a featurized representation of datapoints.
- Return type
np.ndarray
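A toy subclass sketch (hypothetical, requires RDKit): _featurize receives an RDKit Mol, since SMILES inputs are parsed before dispatch, so any RDKit method is available:
>>> import numpy as np
>>> import deepchem as dc
>>> class AtomCountFeaturizer(dc.feat.MolecularFeaturizer):
...     """Toy featurizer: counts heavy atoms in each molecule."""
...     def _featurize(self, datapoint, **kwargs):
...         return np.array([datapoint.GetNumAtoms()])
>>> AtomCountFeaturizer().featurize(["C", "CCC"])
array([[1],
       [3]])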
MaterialCompositionFeaturizer¶
If you’re creating a new featurizer that featurizes compositional formulas,
you will want to inherit from the abstract MaterialCompositionFeaturizer
base class.
- class MaterialCompositionFeaturizer[source]¶
Abstract class for calculating a set of features for an inorganic crystal composition.
The defining feature of a MaterialCompositionFeaturizer is that it operates on the chemical compositions of inorganic crystals. Inorganic crystal compositions are represented by Pymatgen composition objects. Featurizers for inorganic crystal compositions that are subclasses of this class should plan to process input which comes as Pymatgen composition objects.
This class is abstract and cannot be invoked directly. You’ll likely only interact with this class if you’re a developer. Child classes need to implement the _featurize method for calculating features for a single crystal composition.
Note
Some subclasses of this class will require pymatgen and matminer to be installed.
- featurize(datapoints: Optional[Iterable[str]] = None, log_every_n: int = 1000, **kwargs) numpy.ndarray [source]¶
Calculate features for crystal compositions.
- Parameters
datapoints (Iterable[str]) – Iterable sequence of composition strings, e.g. “MoS2”.
log_every_n (int, default 1000) – Logging messages reported every log_every_n samples.
- Returns
features – A numpy array containing a featurized representation of compositions.
- Return type
np.ndarray
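A toy subclass sketch (hypothetical, requires pymatgen): _featurize receives a pymatgen Composition parsed from the input string:
>>> import numpy as np
>>> import deepchem as dc
>>> class ElementCountFeaturizer(dc.feat.MaterialCompositionFeaturizer):
...     """Toy featurizer: counts distinct elements in a composition."""
...     def _featurize(self, datapoint, **kwargs):
...         # datapoint is a pymatgen Composition, e.g. Composition("MoS2")
...         return np.array([len(datapoint.elements)])
>>> ElementCountFeaturizer().featurize(["MoS2"])
array([[2]])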
MaterialStructureFeaturizer¶
If you’re creating a new featurizer that featurizes inorganic crystal structures,
you will want to inherit from the abstract MaterialStructureFeaturizer
base class.
This featurizer can take pymatgen structure objects or dictionaries as inputs.
- class MaterialStructureFeaturizer[source]¶
Abstract class for calculating a set of features for an inorganic crystal structure.
The defining feature of a MaterialStructureFeaturizer is that it operates on 3D crystal structures with periodic boundary conditions. Inorganic crystal structures are represented by Pymatgen structure objects. Featurizers for inorganic crystal structures that are subclasses of this class should plan to process input which comes as pymatgen structure objects.
This class is abstract and cannot be invoked directly. You’ll likely only interact with this class if you’re a developer. Child classes need to implement the _featurize method for calculating features for a single crystal structure.
Note
Some subclasses of this class will require pymatgen and matminer to be installed.
- featurize(datapoints: Optional[Iterable[Union[Dict[str, Any], Any]]] = None, log_every_n: int = 1000, **kwargs) numpy.ndarray [source]¶
Calculate features for crystal structures.
- Parameters
datapoints (Iterable[Union[Dict, pymatgen.core.Structure]]) – Iterable sequence of pymatgen structure dictionaries or pymatgen.core.Structure objects. See https://pymatgen.org/pymatgen.core.structure.html for the dictionary representation of pymatgen.core.Structure.
log_every_n (int, default 1000) – Logging messages reported every log_every_n samples.
- Returns
features – A numpy array containing a featurized representation of datapoints.
- Return type
np.ndarray
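A toy subclass sketch (hypothetical, requires pymatgen): _featurize receives a pymatgen Structure, so any of its methods are available:
>>> import numpy as np
>>> import deepchem as dc
>>> class SiteCountFeaturizer(dc.feat.MaterialStructureFeaturizer):
...     """Toy featurizer: counts the atomic sites in a structure."""
...     def _featurize(self, datapoint, **kwargs):
...         # len() of a pymatgen Structure is its number of sites
...         return np.array([len(datapoint)])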
ComplexFeaturizer¶
If you’re creating a new featurizer that featurizes a pair of ligand molecules and proteins,
you will want to inherit from the abstract ComplexFeaturizer
base class.
This featurizer can take a pair of PDB or SDF files which contain ligand molecules and proteins.
- class ComplexFeaturizer[source]¶
Abstract class for calculating features for mol/protein complexes.
- featurize(datapoints: Optional[Iterable[Tuple[str, str]]] = None, log_every_n: int = 100, **kwargs) numpy.ndarray [source]¶
Calculate features for mol/protein complexes.
- Parameters
datapoints (Iterable[Tuple[str, str]]) – List of filenames (PDB, SDF, etc.) for ligand molecules and proteins. Each element should be a tuple of the form (ligand_filename, protein_filename).
- Returns
features – Array of features
- Return type
np.ndarray
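A rough subclass sketch. It assumes, per the featurize documentation above, that each datapoint arrives as a (ligand_filename, protein_filename) tuple; the feature computed here (file sizes) is purely illustrative:
>>> import os
>>> import numpy as np
>>> import deepchem as dc
>>> class FileSizeFeaturizer(dc.feat.ComplexFeaturizer):
...     """Toy featurizer: byte sizes of the ligand and protein files."""
...     def _featurize(self, datapoint, **kwargs):
...         ligand_file, protein_file = datapoint  # assumed tuple layout
...         return np.array([os.path.getsize(ligand_file),
...                          os.path.getsize(protein_file)])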
VocabularyBuilder¶
If you’re creating a vocabulary builder for generating vocabulary from a corpus or input data,
the vocabulary builder must inherit from the VocabularyBuilder
base class.
HuggingFaceVocabularyBuilder¶
A wrapper class for building a vocabulary using algorithms implemented in the 🤗 Tokenizers library.