qleet.analyzers package

Submodules

qleet.analyzers.entanglement module

Module to evaluate the achievable entanglement in circuits.

class qleet.analyzers.entanglement.EntanglementCapability(circuit: CircuitDescriptor, noise_model: Optional[Union[cirq.devices.noise_model.NoiseModel, qiskit.providers.aer.noise.NoiseModel, pyquil.noise.NoiseModel]] = None, samples: int = 1000)[source]

Bases: MetaExplorer

Calculates entangling capability of a parameterized quantum circuit

entanglement_capability(measure: str = 'meyer-wallach', shots: int = 1024) float[source]

Returns entanglement measure for the given circuit

Parameters
  • measure – specification for the measure used in the entangling capability

  • shots – number of shots for circuit execution

Returns pqc_entangling_capability (float)

entanglement measure value

Raises

ValueError – if invalid measure is specified

gen_params() Tuple[List, List][source]

Generate parameters for the calculation of entangling capability

Returns theta (np.array)

first list of parameters for the parameterized quantum circuit

Returns phi (np.array)

second list of parameters for the parameterized quantum circuit

meyer_wallach_measure(states, num_qubits)[source]

Returns the Meyer-Wallach entanglement measure for the given circuit.

\[Q = \frac{2}{|\vec{\theta}|}\sum_{\theta_{i}\in \vec{\theta}} \Bigg(1-\frac{1}{n}\sum_{k=1}^{n}\text{Tr}(\rho_{k}^{2}(\theta_{i}))\Bigg)\]
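For a single sampled state, the inner Meyer-Wallach term above can be sketched in plain NumPy (a minimal illustration, not the actual qleet implementation):

```python
import numpy as np

def meyer_wallach(state: np.ndarray, num_qubits: int) -> float:
    """Meyer-Wallach Q for a single pure state vector over `num_qubits` qubits."""
    purities = []
    for k in range(num_qubits):
        # Isolate qubit k's axis, then flatten the remaining qubits.
        psi = np.moveaxis(state.reshape([2] * num_qubits), k, 0).reshape(2, -1)
        rho_k = psi @ psi.conj().T  # reduced density matrix of qubit k
        purities.append(np.real(np.trace(rho_k @ rho_k)))
    # Q = 2 * (1 - (1/n) * sum_k Tr(rho_k^2))
    return 2.0 * (1.0 - float(np.mean(purities)))
```

Averaging this quantity over the sampled parameter sets \(\theta_i\) gives the circuit-level measure; a Bell state yields Q = 1 and any product state yields Q = 0.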
static scott_helper(state, perms)[source]

Helper function for the entanglement measure. It gives the trace of the output state.

scott_measure(states, num_qubits)[source]

Returns the Scott entanglement measure for the given circuit.

\[Q_{m} = \frac{2^{m}}{(2^{m}-1) |\vec{\theta}|}\sum_{\theta_i \in \vec{\theta}}\ \bigg(1 - \frac{m!\,(n-m)!}{n!}\sum_{|S|=m} \text{Tr} (\rho_{S}^2 (\theta_i)) \bigg)\ \quad m= 1, \ldots, \lfloor n/2 \rfloor\]

qleet.analyzers.expressibility module

Module to evaluate the expressibility of circuits.

class qleet.analyzers.expressibility.Expressibility(circuit: CircuitDescriptor, noise_model: Optional[Union[cirq.devices.noise_model.NoiseModel, qiskit.providers.aer.noise.NoiseModel, pyquil.noise.NoiseModel]] = None, samples: int = 1000)[source]

Bases: MetaExplorer

Calculates expressibility of a parameterized quantum circuit

expressibility(measure: str = 'kld', shots: int = 1024) float[source]

Returns expressibility for the circuit

\[\begin{split}Expr = D_{KL}(\hat{P}_{PQC}(F; \theta) | P_{Haar}(F))\\ Expr = D_{\sqrt{JSD}}(\hat{P}_{PQC}(F; \theta) | P_{Haar}(F))\end{split}\]
Parameters
  • measure – specification for the measure used in the expressibility calculation

  • shots – number of shots for circuit execution

Returns pqc_expressibility

float, expressibility value

Raises

ValueError – if invalid measure is specified

gen_params() Tuple[List, List][source]

Generate parameters for the calculation of expressibility

Returns theta (np.array)

first list of parameters for the parameterized quantum circuit

Returns phi (np.array)

second list of parameters for the parameterized quantum circuit

static kl_divergence(prob_a: numpy.ndarray, prob_b: numpy.ndarray) float[source]

Returns KL divergence between two probabilities
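The discrete KL divergence over histogrammed probability vectors can be sketched as follows (an illustration; the actual implementation may handle zero bins differently):

```python
import numpy as np

def kl_divergence(prob_a: np.ndarray, prob_b: np.ndarray) -> float:
    """Discrete KL divergence D_KL(prob_a || prob_b) over histogram bins."""
    prob_b = np.clip(prob_b, 1e-12, None)  # avoid division by, or log of, zero
    mask = prob_a > 0                      # 0 * log(0) is taken as 0
    return float(np.sum(prob_a[mask] * np.log(prob_a[mask] / prob_b[mask])))
```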

plot(figsize=(6, 4), dpi=300, **kwargs)[source]

Returns plot for expressibility visualization

prob_haar() numpy.ndarray[source]

Returns probability density function of fidelities for Haar Random States
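For n qubits (Hilbert-space dimension N = 2^n), the fidelity distribution of Haar-random states has the known closed form P_Haar(F) = (N-1)(1-F)^(N-2). Presumably this is what prob_haar evaluates, though qleet may discretize it over histogram bins; a sketch:

```python
import numpy as np

def prob_haar(num_qubits: int, fidelities: np.ndarray) -> np.ndarray:
    """Density of fidelities F between Haar-random states: (N-1)(1-F)^(N-2)."""
    dim = 2 ** num_qubits
    return (dim - 1) * (1.0 - fidelities) ** (dim - 2)
```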

prob_pqc(shots: int = 1024) numpy.ndarray[source]

Returns probability density function of fidelities for PQC

Parameters

shots – number of shots for circuit execution

Returns fidelities (np.array)

np.array of fidelities

qleet.analyzers.loss_landscape module

Module to plot the loss landscapes of circuits.

For any variational quantum algorithm trained to optimize a given metric, a plot of the metric over a projected subspace of the parameters is valuable: it helps us confirm, along random axes, that our point is indeed a local minimum / maximum, and it visualizes how rough the landscape is, giving clues about how likely the variational model is to converge.

We hope that these visualizations can help improve the choice of optimizers and ansätze for these quantum circuits.

class qleet.analyzers.loss_landscape.LossLandscapePlotter(solver: PQCSimulatedTrainer, metric: MetricSpecifier, dim: int = 2)[source]

Bases: MetaExplorer

This class plots the loss landscape for a given PQC trainer object.

It can plot either the true loss being trained on or some other metric; this lets us use proxy metrics as loss functions and see whether they help optimize the true target metric.

These plots currently support 1-D and 2-D subspace projections, since the loss value has to be plotted on the second or third axis. A 3-D projection of the plot will be supported from v1.0.0 onwards, using colors and point density to show the metric values.

plot(mode: str = 'surface', points: int = 25, distance: float = numpy.pi) plotly.graph_objects.Figure[source]

Plots the loss landscape. The surface plot is the best 3D visualization; it uses the Plotly dynamic interface and also includes an overhead contour. For simple 2D plots that can be rendered as matplotlib graphics or easily used in publications, use the line and contour modes.

Parameters
  • mode (str) – the type of plot to produce: line, contour, or surface

  • points (int) – number of points to sample for the metric

  • distance (float) – the range around the current parameters over which to sample

Returns

The figure object that has been generated

Return type

Plotly or matplotlib figure object

Raises

NotImplementedError – For the 1D plotting. TODO Implement 1D plots.

Increasing the number of points improves the quality of the plot but takes far more time, since the cost scales quadratically in the number of points. Lowering the distance is a good idea when using fewer points, since you get the same number of points for a smaller region. Note that these plots can be deceptive: large ridges may be missed due to the limited resolution of the points, so always be careful and try to use as many points as possible before drawing a final inference.

scan(points: int, distance: float, origin: numpy.ndarray) Tuple[numpy.ndarray, numpy.ndarray][source]

Scans the target vector-subspace for values of the metric. Returns the sampled coordinates in the grid and the values of the metric at those coordinates. The subspace is sampled uniformly, and evenly in all directions.

Parameters
  • points (int) – Number of points to sample

  • distance (float) – The range of parameters around the current value to scan over

  • origin (np.ndarray) – The value of the current parameter to be used as origin of our plot

Returns

tuple of the coordinates and the metric values at those coordinates

Return type

a tuple of np.array, shapes being (n, dims) and (n,)
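The subspace scan can be sketched as follows (a hypothetical re-implementation that returns the grid coordinates and the corresponding full parameter vectors, leaving metric evaluation to the caller; not qleet's actual code):

```python
import numpy as np

def scan(points: int, distance: float, origin: np.ndarray, dim: int = 2, seed=None):
    """Uniform grid scan over a random `dim`-D subspace around `origin`.

    Returns (coords, params): grid coordinates of shape (points**dim, dim)
    and the full parameter vectors at each grid point, shape (points**dim, n).
    """
    rng = np.random.default_rng(seed)
    # Random orthonormal axes spanning the projected subspace.
    axes, _ = np.linalg.qr(rng.standard_normal((origin.shape[0], dim)))
    ticks = np.linspace(-distance, distance, points)
    coords = np.stack(np.meshgrid(*([ticks] * dim)), axis=-1).reshape(-1, dim)
    params = origin + coords @ axes.T  # map grid points into parameter space
    return coords, params
```

Evaluating the metric at each row of `params` then yields the (n,) vector of values that gets plotted against `coords`.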

qleet.analyzers.training_path module

Module responsible for generating plots of the training trajectory.

The training trajectory is the set of parameter values (projected down to some low dimensional space) that the model passed through over the different epochs of its training process. Plotted for one model, this tells us whether the loss was always decreasing, whether the learning rate should be lowered or raised, what the schedule should look like, etc. Plotted for more than one model, it lets us know whether the paths are converging, giving us a view of how likely our generated solution is to be optimal. If many of the models converge to the same path and start mixing, they are likely optimal; if not, they are more likely to be merely chance solutions.

class qleet.analyzers.training_path.LossLandscapePathPlotter(base_plotter: LossLandscapePlotter)[source]

Bases: MetaLogger

A module to plot the training path of the PQC on the loss landscape

This class is an extension of the Loss Landscape plotter and the Training Path plotter; it puts both ideas together and shows how the different models ended up at different parts of the loss landscape.

log(solver: PQCSimulatedTrainer, loss: float)[source]

Logs the value of the parameters that the circuit currently has. The parameter values should be a numpy vector.

Parameters
  • solver (PQCSimulatedTrainer) – The trainer module which has the parameters to be plotted

  • loss (float) – The value of the loss at the current epoch

plot()[source]

Plots the 2D parameter projections with the loss value on the 3rd dimension. Over the entire set of runs, the class has logged the parameter values. It reduces the dimensionality of those parameter vectors using PCA or t-SNE, plots them on a 2D plane, and associates each point with a loss value on the third dimension. This output is coupled with the actual loss landscape drawing and returned.

Returns

The figure on which the parameter projections are plotted

Return type

Plotly figure

class qleet.analyzers.training_path.OptimizationPathPlotter(mode: str = 'tSNE')[source]

Bases: MetaLogger

Class which logs the parameter information and plots it over the iterations of training.

This is used to plot the parameter values in 2-D or 3-D; rather than the raw parameter values, a t-SNE or PCA projection of them is plotted. To also get the loss values for the associated training points as part of the plot, see LossLandscapePathPlotter.

This class conforms to the MetaLogger interface and can be used as part of an AnalyzerList when plotting the training properties of a circuit.

log(solver: PQCSimulatedTrainer, _loss: float) None[source]

Logs the value of the parameters that the circuit currently has. The parameter values should be a numpy vector.

Parameters
  • solver (PQCSimulatedTrainer) – The trainer module which has the parameters to be plotted

  • _loss (float) – The loss value at that epoch, not used by this class

plot() plotly.graph_objects.Figure[source]

Plots the 2D parameter projections. For the entire set of runs, the class has logged the parameter values. Now it reduces the dimensionality of those parameter vectors using PCA or tSNE and then plots them on a 2D plane.

Returns

The figure on which the parameter projections are plotted

Return type

Plotly figure
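The PCA variant of this projection can be sketched with a plain SVD (t-SNE would require an external library such as scikit-learn; this is an illustration, not qleet's code):

```python
import numpy as np

def pca_project(param_log: np.ndarray, dim: int = 2) -> np.ndarray:
    """Project logged parameter vectors (epochs x n_params) onto top `dim` PCs."""
    centered = param_log - param_log.mean(axis=0)
    # Right singular vectors are the principal directions, sorted by variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:dim].T
```

Each row of the result is one epoch's parameter vector placed on the 2D plane, ready to be drawn as a trajectory.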

qleet.analyzers.histogram module

Module contents