respy.shared
Contains functions which are shared across other modules.
This module should only import from other packages, or from modules of respy which themselves do not import from respy. This prevents circular imports.
aggregate_keane_wolpin_utility(wage, nonpec, continuation_value, draw, delta)
    Calculate the utility of Keane and Wolpin models.
create_base_draws(shape, seed, monte_carlo_sequence)
    Create a set of draws from the standard normal distribution.
transform_base_draws_with_cholesky_factor(draws, choice_set, shocks_cholesky, optim_paras)
    Transform standard normal draws with the Cholesky factor.
generate_column_dtype_dict_for_estimation(optim_paras)
    Generate column labels for the data necessary for the estimation.
downcast_to_smallest_dtype(series, downcast_options=None)
    Downcast the dtype of a pandas.Series to the lowest possible dtype.
compute_covariates(df, definitions, check_nans=False, raise_errors=True)
    Compute covariates.
convert_labeled_variables_to_codes(df, optim_paras)
    Convert labeled variables to codes.
rename_labels_to_internal(x)
    Shorten labels and convert them to lower-case.
rename_labels_from_internal(x)
normalize_probabilities(probabilities)
    Normalize probabilities such that their sum equals one.
calculate_value_functions_and_flow_utilities(wage, nonpec, continuation_value, draw, delta, value_function, flow_utility)
    Calculate the choice-specific value functions and flow utilities.
create_core_state_space_columns(optim_paras)
    Create internal column names for the core state space.
create_dense_state_space_columns(optim_paras)
    Create internal column names for the dense state space.
create_dense_choice_state_space_columns(optim_paras)
create_state_space_columns(optim_paras)
    Create names of state space dimensions excluding the period and identifier.
calculate_expected_value_functions(wages, nonpecs, continuation_values, draws, delta, expected_value_functions)
    Calculate the expected maximum of value functions for a set of unobservables.
convert_dictionary_keys_to_dense_indices(dictionary)
    Convert the keys to tuples containing integers.
subset_cholesky_factor_to_choice_set(cholesky_factor, choice_set)
    Subset the Cholesky factor to the dimensions required by the admissible choice set.
return_core_dense_key(core_idx, dense=False)
    Return core dense keys in the right format.
pandas_dot(x, beta, out=None)
    Compute the dot product of a DataFrame and a Series.
map_observations_to_states(states, state_space, optim_paras)
    Map observations in the data to states.
_map_observations_to_core_states_numba(core, indexer)
    Map observations to states in Numba.
_map_observations_to_dense_index(dense, core_index, dense_covariates_to_dense_index, core_key_and_dense_index_to_dense_key)
dump_states(states, complex_, options)
    Dump states.
load_states(complex_, options)
    Load states.
_create_file_name_from_complex_index(complex_)
    Create a file name from a complex index.
prepare_cache_directory(options)
    Prepare the cache directory.
select_valid_choices(choices, choice_set)
    Select valid choices.
respy.shared.aggregate_keane_wolpin_utility(wage, nonpec, continuation_value, draw, delta)
    Calculate the utility of Keane and Wolpin models.
    Note that the function works for working and non-working alternatives, as wages are set to one for non-working alternatives such that the draws enter the utility function additively.
    Parameters:
        wage (float): Value of the wage component. Note that for non-working alternatives this value is actually zero, but to simplify computations it is set to one.
        nonpec (float): Value of the non-pecuniary component.
        continuation_value (float): Value of the continuation value, which is the expected present value of the following state.
        draw (float): The shock which enters the reward of working alternatives multiplicatively and of non-working alternatives additively.
        delta (float): The discount factor used to calculate the present value of continuation values.
    Returns:
        float: The expected present value of an alternative.
        float: The immediate reward of an alternative.
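The aggregation described above can be sketched in plain Python. This is a minimal illustration of the computation, not the Numba-compiled implementation in respy:

```python
def aggregate_keane_wolpin_utility(wage, nonpec, continuation_value, draw, delta):
    # Working alternatives: the draw enters multiplicatively through the wage.
    # Non-working alternatives: wage is set to one, so the draw enters additively.
    flow_utility = wage * draw + nonpec
    # Add the discounted expected present value of the following state.
    value_function = flow_utility + delta * continuation_value
    return value_function, flow_utility
```

The function returns both the expected present value of the alternative and its immediate reward, mirroring the two return values documented above.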
respy.shared.create_base_draws(shape, seed, monte_carlo_sequence)
    Create a set of draws from the standard normal distribution.
    The draws are either drawn randomly or from quasi-random low-discrepancy sequences, i.e., Sobol or Halton.
    "random" is used to draw random standard normal shocks for the Monte Carlo integrations or because individuals face random shocks in the simulation.
    "halton" or "sobol" can be used to change the sequence for two Monte Carlo integrations: the calculation of the expected value function (EMAX) in the solution and the choice probabilities in the maximum likelihood estimation.
    For the solution and estimation it is necessary to have the same randomness in every iteration. Otherwise, there is chatter in the simulation, i.e., a difference in simulated values not only due to different parameters but also due to different draws (see 10.5 in [1]). At the same time, the variance-covariance matrix of the shocks is estimated along with all other parameters and changes in every iteration. Thus, instead of sampling draws from a varying multivariate normal distribution, standard normal draws are sampled here and transformed to the distribution specified by the parameters in transform_base_draws_with_cholesky_factor().
    Parameters:
        shape (tuple of int): Tuple representing the shape of the resulting array.
        seed (int): Seed to control randomness.
        monte_carlo_sequence (str): Name of the sequence.
    Returns:
        draws (numpy.ndarray): Array with shape (n_choices, n_draws, n_choices).
    See also:
        transform_base_draws_with_cholesky_factor
    References:
        [1] Train, K. (2009). Discrete Choice Methods with Simulation. Cambridge: Cambridge University Press.
        [2] Lemieux, C. (2009). Monte Carlo and Quasi-Monte Carlo Sampling. New York: Springer-Verlag.
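A minimal sketch of how such standard normal draws can be generated, either randomly or from a low-discrepancy sequence. The use of scipy.stats.qmc is an assumption for illustration; respy's actual backend may differ. Quasi-random engines yield uniforms on [0, 1), which the inverse normal CDF maps to standard normal deviates:

```python
import numpy as np
from scipy.stats import norm, qmc

def base_draws_sketch(n_draws, n_choices, seed, monte_carlo_sequence="random"):
    """Draw standard normal deviates randomly or from a low-discrepancy sequence."""
    if monte_carlo_sequence == "random":
        rng = np.random.default_rng(seed)
        return rng.standard_normal((n_draws, n_choices))
    # Quasi-random sequences produce uniforms on [0, 1); the inverse normal
    # CDF (probability integral transform) maps them to standard normals.
    engine = {"sobol": qmc.Sobol, "halton": qmc.Halton}[monte_carlo_sequence]
    uniforms = engine(d=n_choices, seed=seed).random(n_draws)
    return norm.ppf(uniforms)

draws = base_draws_sketch(64, 3, seed=123, monte_carlo_sequence="sobol")
```

Because the seed fixes the draws, repeated calls in an estimation loop reproduce the same randomness, which avoids the chatter discussed above.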
respy.shared.transform_base_draws_with_cholesky_factor(draws, choice_set, shocks_cholesky, optim_paras)
    Transform standard normal draws with the Cholesky factor.
    The standard normal draws are transformed to normal draws with variance-covariance matrix \(\Sigma\) by multiplication with the Cholesky factor \(L\), where \(L^T L = \Sigma\). See chapter 7.4 in [1] for more information.
    This function relates to create_base_draws() in the sense that it transforms the unchanging standard normal draws to the distribution with the variance-covariance matrix specified by the parameters.
    See also:
        create_base_draws
    References:
        [1] Gentle, J. E. (2009). Computational Statistics (Vol. 308). New York: Springer.
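The transformation itself can be sketched with NumPy. Note that numpy.linalg.cholesky returns a lower-triangular factor C with C C^T = Sigma, so right-multiplying each row of draws by C^T matches the convention L^T L = Sigma stated above:

```python
import numpy as np

def transform_draws_sketch(base_draws, cov):
    """Map standard normal draws of shape (n_draws, n_choices) to draws with covariance cov."""
    # numpy.linalg.cholesky returns lower-triangular C with C @ C.T == cov,
    # so cov(x @ C.T) = C @ cov(x) @ C.T = C @ C.T = cov for standard normal x.
    chol = np.linalg.cholesky(cov)
    return base_draws @ chol.T

cov = np.array([[1.0, 0.5], [0.5, 2.0]])
rng = np.random.default_rng(0)
base = rng.standard_normal((200_000, 2))
transformed = transform_draws_sketch(base, cov)
# The empirical covariance approaches the target as the number of draws grows.
```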
respy.shared.downcast_to_smallest_dtype(series, downcast_options=None)
    Downcast the dtype of a pandas.Series to the lowest possible dtype.
    By default, variables are converted to signed or unsigned integers. Use "float" to cast variables from float64 to float32.
    Be aware that NumPy integers overflow silently, which is why conversion to low dtypes should be done after calculations. For example, squaring the elements of a numpy.uint8 array silently overflows whenever a result exceeds 255.
    For more information on the dtype boundaries, see the NumPy documentation: https://docs.scipy.org/doc/numpy-1.17.0/user/basics.types.html.
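Such downcasting can be sketched with pandas.to_numeric. This is an illustrative helper under assumed defaults, not respy's exact implementation:

```python
import numpy as np
import pandas as pd

def downcast_sketch(series, downcast_options=None):
    """Try downcast targets in order and return the first that shrinks the dtype."""
    if downcast_options is None:
        # Assumed default: prefer unsigned, then signed integers.
        downcast_options = ["unsigned", "integer"]
    out = series
    for option in downcast_options:
        out = pd.to_numeric(series, downcast=option)
        if out.dtype != series.dtype:
            break  # a smaller dtype was found
    return out

small = downcast_sketch(pd.Series([0, 100, 255], dtype=np.int64))
```

Here values in [0, 255] fit into an unsigned 8-bit integer, so the int64 series shrinks to uint8; passing `["float"]` casts float64 down to float32 instead.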
respy.shared.compute_covariates(df, definitions, check_nans=False, raise_errors=True)
    Compute covariates.
    The function iterates over the definitions of covariates and tries to compute them. It keeps track of how many covariates still need to be computed and stops if the number does not change anymore, which might be due to missing information.
    Parameters:
        df (pandas.DataFrame): DataFrame with some, but maybe not all, state space dimensions like period and experiences.
        definitions (dict): Keys represent covariates and values are strings passed to df.eval.
        check_nans (bool, default False): Perform a check that the variables used to compute the selected covariate do not contain any np.nan. This is necessary in respy.simulate._sample_characteristic() where some characteristics may contain missings.
        raise_errors (bool, default True): Whether to raise errors if variables cannot be computed. This option is necessary for, e.g., _sample_characteristic(), where not all necessary variables exist and it is not easy to exclude covariates which depend on them.
    Returns:
        pandas.DataFrame: DataFrame with shape (n_states, n_covariates).
    Raises:
        Exception: If variables cannot be computed and raise_errors is true.
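The iteration described above (compute what you can, stop once a full pass makes no progress) can be sketched as follows. The helper name and details are illustrative, not respy's exact implementation:

```python
import pandas as pd

def compute_covariates_sketch(df, definitions, raise_errors=True):
    """Evaluate covariate definitions, allowing covariates to depend on each other."""
    df = df.copy()
    pending = dict(definitions)
    while pending:
        n_before = len(pending)
        for name, formula in list(pending.items()):
            try:
                df[name] = df.eval(formula)
                del pending[name]
            except Exception:
                pass  # dependencies may not be computed yet; retry next round
        if len(pending) == n_before:  # no progress: information is missing
            if raise_errors:
                raise Exception(f"Cannot compute covariates: {sorted(pending)}")
            break
    return df

states = pd.DataFrame({"exp_a": [0, 2, 5]})
definitions = {"any_exp_a": "exp_a > 0", "twice": "exp_a * 2", "four": "twice * 2"}
result = compute_covariates_sketch(states, definitions)
```

Note that "four" depends on "twice", which is itself a covariate; the retry loop resolves such dependencies without requiring a topological order in `definitions`.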
respy.shared.convert_labeled_variables_to_codes(df, optim_paras)
    Convert labeled variables to codes.
    We need to check choice variables and observables for potential labels. The mapping from labels to codes can be inferred from the order in optim_paras.
respy.shared.normalize_probabilities(probabilities)
    Normalize probabilities such that their sum equals one.
    Examples:
        The following probs do not sum to one.
        >>> probs = np.array([0.3775843411510946, 0.5384246942799851, 0.6522988820635421])
        >>> normalize_probabilities(probs)
        array([0.24075906, 0.34331568, 0.41592526])
respy.shared.calculate_value_functions_and_flow_utilities(wage, nonpec, continuation_value, draw, delta, value_function, flow_utility)
    Calculate the choice-specific value functions and flow utilities.
    To apply aggregate_keane_wolpin_utility() to arrays with arbitrary dimensions, this function uses numba.guvectorize(). numba.vectorize() cannot be used because it does not support multiple return values.
respy.shared.calculate_expected_value_functions(wages, nonpecs, continuation_values, draws, delta, expected_value_functions)
    Calculate the expected maximum of value functions for a set of unobservables.
    The function takes an agent and calculates the utility for each of the choices, the ex-post rewards, with multiple draws from the distribution of unobservables and adds the discounted expected maximum utility of subsequent periods resulting from the choices. Averaging over all maximum utilities yields the expected maximum utility of this state.
    The underlying process in this function is called Monte Carlo integration. The goal is to approximate an integral by evaluating the integrand at randomly chosen points. In this setting, one wants to approximate the expected maximum utility of the current state.
    Note that wages have the same length as nonpecs although wages are only available for some choices; missing wages are filled with ones. Consequently, the flow utilities are wage * draw + nonpec for a choice with a wage and 1 * draw + nonpec for a choice without one.
    Parameters:
        wages (numpy.ndarray): Array with shape (n_choices,) containing wages.
        nonpecs (numpy.ndarray): Array with shape (n_choices,) containing non-pecuniary rewards.
        continuation_values (numpy.ndarray): Array with shape (n_choices,) containing the expected maximum utility for each choice in the subsequent period.
        draws (numpy.ndarray): Array with shape (n_draws, n_choices).
        delta (float): The discount factor.
    Returns:
        expected_value_functions (float): Expected maximum utility of an agent.
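The Monte Carlo integration described above can be sketched with NumPy broadcasting. This is a plain illustration of the computation, not the Numba kernel used in respy:

```python
import numpy as np

def expected_value_functions_sketch(wages, nonpecs, continuation_values, draws, delta):
    """Approximate EMAX by averaging the maximum value function over draws."""
    # Flow utility per draw and choice: wages are filled with ones for
    # non-working alternatives, so their draws enter additively.
    flow_utilities = wages * draws + nonpecs                      # (n_draws, n_choices)
    value_functions = flow_utilities + delta * continuation_values
    # Each draw yields one maximum over choices; averaging the maxima
    # approximates the expected maximum utility of the state.
    return value_functions.max(axis=1).mean()

wages = np.array([2.0, 1.0])                  # second choice is non-working
nonpecs = np.array([0.0, 1.5])
continuation_values = np.array([10.0, 8.0])
draws = np.zeros((5, 2))                      # degenerate draws for illustration
emax = expected_value_functions_sketch(wages, nonpecs, continuation_values, draws, delta=0.95)
```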
respy.shared.convert_dictionary_keys_to_dense_indices(dictionary)
    Convert the keys to tuples containing integers.
    Examples:
        >>> dictionary = {(0.0, 1): 0, 2: 1}
        >>> convert_dictionary_keys_to_dense_indices(dictionary)
        {(0, 1): 0, (2,): 1}
respy.shared.subset_cholesky_factor_to_choice_set(cholesky_factor, choice_set)
    Subset the Cholesky factor to the dimensions required by the admissible choice set.
    Examples:
        >>> m = np.arange(9).reshape(3, 3)
        >>> subset_cholesky_factor_to_choice_set(m, (False, True, False))
        array([[4]])
respy.shared.pandas_dot(x, beta, out=None)
    Compute the dot product of a DataFrame and a Series.
    The function computes each product in the dot product separately to limit the impact of converting a Series to an array.
    To access the NumPy array, .values is used instead of .to_numpy() because it is faster; .to_numpy() only adds safety for extension arrays, which are not used here.
    Parameters:
        x (pandas.DataFrame): A DataFrame containing the covariates of the dot product.
        beta (pandas.Series): A Series containing the parameters or coefficients of the dot product.
        out (numpy.ndarray, optional): An output array can be passed to the function, which is filled instead of allocating a new array.
    Returns:
        numpy.ndarray: Array with shape (len(x),) which contains the solution of the dot product.
    Examples:
        >>> x = pd.DataFrame(np.arange(10).reshape(5, 2), columns=list("ab"))
        >>> beta = pd.Series([1, 2], index=list("ab"))
        >>> x.dot(beta).to_numpy()
        array([ 2,  8, 14, 20, 26]...
        >>> pandas_dot(x, beta)
        array([ 2.,  8., 14., 20., 26.])
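A column-by-column accumulation as described above can be sketched like this; the helper is illustrative and may differ from respy's actual implementation:

```python
import numpy as np
import pandas as pd

def pandas_dot_sketch(x, beta, out=None):
    """Accumulate the dot product one column at a time."""
    if out is None:
        out = np.zeros(len(x))
    for label, coefficient in beta.items():
        # .values exposes the underlying NumPy array of a single column,
        # so only one column is converted per step rather than the whole frame.
        out += coefficient * x[label].values
    return out
```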
respy.shared.prepare_cache_directory(options)
    Prepare the cache directory. The directory contains the parts of the state space.
respy.shared.select_valid_choices(choices, choice_set)
    Select valid choices.
    Examples:
        >>> select_valid_choices(list("abcde"), (1, 0, 1, 0, 1))
        ['a', 'c', 'e']
        >>> select_valid_choices(list("abc"), (0, 1, 0, 1, 0))
        ['b']