Generation of Conditional Random Variables

I am updating my question since it seemed a bit confusing before.
I hope it is clear now (in case it is not, please ask and I will clarify):
Assume I have 16 random variables X1,..,X16 and 16 constant numbers C1,..,C16.
I want to generate samples of (X1+...+X16), considering that X1,..,X16 are random variables (1 to 8 are normally distributed and 9 to 16 are Weibull), and the generated samples must always respect the following constraints:
X1/(X1+ ... +X16)=C1
...
X16/(X1 + ... + X16)=C16
where C1,...,C16 are constant values known prior to the generation of the RVs,
and C1+...+C16 is always 1 (and C1,...,C16 are always positive numbers).
Thanks a lot for any suggestions

2 comments

Cris LaPierre on 3 Jan 2021
What do you mean by respecting these weights?
Conrado Neto on 3 Jan 2021
Thanks for reading my problem.
I mean that I need to generate more samples of all those random variables that keep those proportions between themselves: different new samples, same proportions across all the generated samples.


Accepted Answer

Bruno Luong on 9 Jan 2021 (edited 9 Jan 2021)
I don't quite understand what you want, but from your request you seem to want to compute
X1 = C1 * S
X2 = C2 * S
...
X16 = C16 * S
with S being the sum of the X_i. Therefore you just need to generate S (since all the C are known).
If you know the pdf of each X_i then in principle you can compute the theoretical pdf of S.
For example, if the X_i are independent and normally distributed with the same standard deviation sigma, then S is normally distributed with standard deviation sigma*sqrt(16).
Then all you need is to generate S randomly from its pdf and apply the scaling above, and you are done.
If you are not able to compute the theoretical pdf of a sum of mixed normal and Weibull variables (admittedly not trivial), the sum converges toward a normal distribution with
meanS := mean(S) = sum(mean(X_i))
stdS := std(S) = sqrt(sum(std(X_i)^2))
IMO if you generate S as
S = meanS + stdS*randn()
then
X = C * S
with C being the array of 16 known values, it should fit your need well.
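Bruno's approximate recipe (draw S from its normal approximation, then scale by the known weights) can be sketched outside MATLAB as well. Here is a minimal Python version; all parameter values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters: 16 variables with illustrative means/stds.
mean_x = np.full(16, 1.0)      # mean of each X_i
std_x = np.full(16, 0.5)       # std of each X_i

# Normal approximation of S = sum(X_i) (exact only if all X_i are normal).
mean_s = mean_x.sum()
std_s = np.sqrt((std_x ** 2).sum())

# Known weights, normalized to sum to 1.
C = rng.random(16)
C = C / C.sum()

# Draw samples of S, then scale each sample by the weight vector.
s = mean_s + std_s * rng.standard_normal(10_000)
X = np.outer(s, C)             # each row is one realization of (X1..X16)

# Every row satisfies X_i / sum(X) = C_i by construction.
ratios = X / X.sum(axis=1, keepdims=True)
```

Note that the ratio constraints hold exactly for every sample; the approximation only concerns the distribution of S itself, which is the point debated in the rest of the thread.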

34 comments

Conrado Neto on 9 Jan 2021
Thanks for your reply Bruno. I can generate the distribution of S if it were simply the sum of the X_i. However, in my case, I need an S that is the sum of the X_i respecting the conditions X1/(X1+..+X16)=C1, ..., X16/(X1+..+X16)=C16, where C1,..,C16 are constant (known) values, not RVs.
Because of those conditions, I cannot simply generate S as the sum of the X_i, because those generated values of S wouldn't respect the conditions. I need to generate values of S that are a sum of the X_i and that always respect those conditions.
Conrado Neto on 9 Jan 2021
Ok, can you clarify a few points for me. I can generate the theoretical pdf of the sum of normal and weibull RVs, and it is not a normal distribution from my results. How do I consider those conditions that X1/(X1+..+X16)=C1 and so on after that?
Conrado Neto on 9 Jan 2021
Because if I generate values of S from its distribution as the sum of the normal and Weibull RVs, and then, to find X1 (for example), I just multiply the samples of S by C1, that is not correct.
That's because the C1 condition affects S too, so I cannot just generate S ignoring C1,..,C16 and then multiply.
Bruno Luong on 9 Jan 2021 (edited 9 Jan 2021)
You are correct. What I suggested is not rigorous.
If you want a correct method, consider this system
M = eye(16)-C(:).*ones(16);
M*X = 0
For C such that sum(C)==1, this system has a one-dimensional solution space
X = s * x;
where x = null(M) = C, and s is a free parameter in R.
You need to compute the pdf of s theoretically; it is proportional to
prod(PDF_i(s*C(i)))
where PDF_i is the pdf of X_i.
Once you have that pdf you can generate s, then X = s*C.
Conrado Neto on 9 Jan 2021
Exactly, so now the question is how to obtain the pdf of s theoretically
Bruno Luong on 9 Jan 2021 (edited 9 Jan 2021)
prod(PDF_i(s*C(i)))
then normalize it.
Example
if
Xi ~ N(mu,sigma), then
PDF_i(x) = 1/(sigma*sqrt(2*pi)) * exp(-1/2*((x-mu)/sigma)^2)
PDF(s) ~ prod(exp(-1/2*((s*C(i)-mu)/sigma)^2))
~ exp(-1/2*sum((s*C(i)-mu)^2)/sigma^2)
which is also a normal distribution.
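Bruno's observation that the product of Gaussian densities evaluated along the line X = s*C collapses into a single Gaussian in s can be checked numerically: the log of the unnormalized density must be exactly quadratic in s. A small Python check, with all per-variable parameters invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-variable Gaussian parameters and weights.
n = 16
mu = rng.random(n)
sigma = 1.0 + rng.random(n)
C = rng.random(n)
C = C / C.sum()

def log_pdf_s(s):
    # Unnormalized log-density of s: sum of Gaussian log-pdfs at s*C_i
    # (additive constants dropped).
    z = (np.outer(s, C) - mu) / sigma
    return -0.5 * (z ** 2).sum(axis=1)

s_grid = np.linspace(-20.0, 40.0, 400)
logp = log_pdf_s(s_grid)

# If logp is quadratic in s, the conditional density of s is Gaussian.
coef = np.polyfit(s_grid, logp, 2)
residual = logp - np.polyval(coef, s_grid)
```

The quadratic fit should reproduce the log-density to machine precision, and the leading coefficient is negative, confirming a proper (integrable) Gaussian in s.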
Conrado Neto on 9 Jan 2021
Ok, I am trying to understand your solution but I am struggling a bit with the notation and 'what is what'.
Can we start over and find a mathematically rigorous approach for my problem, which is:
I want to generate samples of (X1+...+X16), considering that X1,..,X16 are random variables (1 to 8 are normally distributed and 9 to 16 are weibull) and the generated samples must always respect the following constraints:
X1/(X1+ ... +X16)=C1
...
X16/(X1 + ... + X16)=C16
where C1,...C16 are NOT random variables and are known prior to the generation of the RVs.
and C1+...+C16 is always 1 (and C1,...,C16 are always positive numbers).
I would really appreciate your help as I still haven't solved my problem properly yet.
Bruno Luong on 9 Jan 2021 (edited 9 Jan 2021)
This is exactly what I understood.
Your equations
X1/(X1+ ... +X16)=C1
...
X16/(X1 + ... + X16)=C16
can be rewritten as
M*X = 0
with
M := eye(16)-C(:).*ones(16);
with X a random (16 x 1) vector variable. So you want to generate from the conditional probability
P(X | M*X=0)
The rest is just computing the joint probability of the 16 variables and then restricting it to the set
{X : M*X = 0} = { X = s*C, for s in R }.
I don't have anything to change in the method I have already described; you just need to follow it. The nasty part is the mix of Gaussian/Weibull, which you might not be able to simplify as in the all-Gaussian example I showed.
Conrado Neto on 9 Jan 2021
thanks for your help, I am not sure I will be able to handle the "nasty part" as you mentioned, but I will try and report back here.
Bruno Luong on 9 Jan 2021
If you cannot simplify or solve it analytically to invert the cdf (needed to generate s), you can always discretize the pdf numerically and generate random values following that numerical approximation of the pdf.
Conrado Neto on 10 Jan 2021
yeah, numerically it would work, but then it becomes a computational-time issue, as I have to run this type of analysis several times
Bruno Luong on 10 Jan 2021
Did you try it, or do you only suspect there would be a runtime issue?
Bruno Luong on 10 Jan 2021 (edited 10 Jan 2021)
Here is the demo code; adapt it to your needs. The not-well-defined part is the range over which to discretize the pdf of s (I suppose that if you assume all the pdfs of X are Gaussian, you could estimate the range well from the mean and standard deviation of X, similar to the central limit theorem).
Gausspdf = @(x,mu,sigma) 1./(sigma*sqrt(2*pi))*exp(-(x-mu).^2./(2*sigma^2));
WeiBullpdf = @(x,k,lambda) (k./lambda).*(x./lambda).^(k-1).*exp(-(x/lambda).^k);
% pdf parameters
nG = 8; nW = 8;
mu = 1; sigma = 0.5;
lambda = 1; k = 1.5;
pdftab = [repmat({@(x) Gausspdf(x,mu,sigma)}, [1, nG]) ...
repmat({@(x) WeiBullpdf(x,k,lambda)}, [1, nW])];
% Generate a test (known) C table
n = length(pdftab);
C = rand(1,n);
C = C/sum(C);
% joint probability function, computed at s*C
spdf_fun = @(s) prod(arrayfun(@(f,x) feval(f{1},x), pdftab, s*C));
% Discretize s, make sure the range has a significant distribution
% contribution by inspecting the plot
% The range depends on the parameters used to build pdftab and C
smin = 1;
smax = 25;
ns = 1000;
s = linspace(smin,smax,ns).';
smid = 0.5*(s(1:end-1)+s(2:end));
% plot to see where the distribution is significant
spdf = arrayfun(spdf_fun,smid);
plot(smid,spdf);
xlabel('s');
ylabel('spdf');
% Generate s from the discretized pdf
scdf = [0; cumsum(spdf)];
scdf = scdf / scdf(end);
[scdf,I] = unique(scdf);
s = interp1(scdf, s(I), rand(), 'pchip')
% Here is X vector
X = s*C
Bruno,
I'm trying hard to understand this answer and have a few questions.
The first isn't a mathematical question (at least I don't think so): after defining s = linspace(...), why is it necessary to then immediately define smid? Why not define smid = linspace(...)?
Mathematically, it appears that a continuous random variable, S, has probability density function spdf_fun. The variable spdf then takes on values of the pdf of S where the pdf is significant, as it appears in the plot. If spdf_fun is a pdf, then it must integrate to unity. But:
>> trapz(smid,spdf)
ans =
1.0248e-04
Am I misunderstanding something?
Bruno Luong on 10 Jan 2021 (edited 10 Jan 2021)
The s is the "edges" or partition of the interval, and smid are the points inside the intervals; they relate respectively to the x and t of a Riemann sum. Later I use s for interpolation, so I need both s and smid.
spdf_fun must integrate to 1 on R^16. But for the conditional probability, when integrated on the restricted set { M*X = 0 }, which is a subset of a manifold of dimension 1, there is no reason it gives 1. You are correct that spdf_fun returns an unnormalized pdf with respect to the parameter s.
However I take care of the renormalization with this statement in the code
scdf = scdf / scdf(end);
That statement accounts for the step size and scaling factor so as to force the Riemann sum (== scdf(end)) to be 1.
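The normalization trick Bruno describes here (divide the cumulative sum by its last value, then invert the discretized CDF) works for any nonnegative tabulated density, regardless of bin width or overall scale, because both cancel in the division. A Python sketch with a made-up density shape:

```python
import numpy as np

rng = np.random.default_rng(2)

# Grid edges and midpoints, as in the MATLAB demo's s and smid.
s = np.linspace(0.0, 10.0, 1001)
smid = 0.5 * (s[:-1] + s[1:])

# Hypothetical unnormalized density (shape does not matter here).
pdf_unnorm = np.exp(-0.5 * (smid - 4.0) ** 2)

# Cumulative sum plays the role of an unnormalized CDF at the edges;
# dividing by the last value normalizes it, bin width and all.
cdf = np.concatenate(([0.0], np.cumsum(pdf_unnorm)))
cdf /= cdf[-1]

# Inverse-transform sampling: map uniforms through the inverse CDF.
u = rng.random(50_000)
samples = np.interp(u, cdf, s)
```

With this unit-variance bump centered at 4, the generated samples recover that mean and spread, which is exactly what the scdf-based renormalization is meant to guarantee.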
Paul on 10 Jan 2021
How can spdf_fun integrate to 1 on R^16? Isn't it only defined on R^1?
Maybe I'm confused about what spdf_fun is? Is it a probability density function? If so, is it a conditional density? What are the random variables that it defines?
Referring to this statement: "But for the conditional probability, when integrated ...", can you please clarify? Conditional probability of what event conditioned on what other event? A conditional probability is a number, not a function to be integrated. If it should read as "But for the conditional probability [density function], when integrated ...", then shouldn't it have to integrate to unity, as must be the case for any probability density function?
Bruno Luong on 10 Jan 2021 (edited 10 Jan 2021)
Sorry, I use words a little bit loosely.
If X is a vector of the same size as pdftab, considered as a vector in R^16, then the integral of this function is 1:
@(X) prod(arrayfun(@(f,x) feval(f{1},x), pdftab, X));
spdf_fun = @(s) ...
is a pdf defined on a manifold of dimension 1 (in fact a line in R^16 parametrized by s).
"Conditional probability": again I use the words a little bit loosely; please read this
and related topics.
The "condition" here is M*X = 0: we need to evaluate the probability on the set { M*X = 0 }. In the terms of that Wiki page, spdf_fun returns the numerator (an unnormalized pdf); the normalization in the denominator is such that the integral on the subset (the conditional probability distribution) is equal to 1.
Paul on 10 Jan 2021
Ok. I think I see what you're doing. I think it would be clearer to normalize spdf rather than scdf, but I guess at the end of the day the same X will result.
So I think the process goes like this: define the joint pdf of the Xi. Then take the "slice" out of the joint pdf that lies along the line M*X = 0, as parameterized by s. That slice, after normalization, defines a conditional pdf, from which samples of S are generated (the variable s, in s = interp1 ...), and then finally samples of X are generated from S (X = S.*C). As you noted, the answer as posted doesn't normalize the slice but rather normalizes the CDF.
After thinking more about your procedure, I realized that my answer, as posted, might actually be a solution to the problem (see my comment above). However, I compared your result to mine for the parameters in my problem (2 normal, 2 Weibull, and different parameters for the RVs), and came up with different distributions for the columns of X. So I'm still confused (and it could have been a mistake on my part in implementing your code). Any idea what's different about my answer compared to yours?
Is spdf_fun (after normalization) supposed to be the same as f_y1giveny2y3y4 in my answer? They look like they have similar shapes, but the peak of the former is to the left of the peak of the latter, and I did notice that spdf_fun can take on negative values for s < 0.
When the slice is taken out of the joint pdf, does it account for the fact that the joint pdf of the Xi doesn't have support over the entirety of R^n (because the Weibull variables only have support for x > 0)?
Bruno Luong on 11 Jan 2021 (edited 11 Jan 2021)
"I did notice that spdf_fun can take on negative values for s < 0"
This is a good observation. Indeed it can BUT if I would select s < 0, then I have to introduce the fact that the WeiBullpdf(x) expression is only define for x >= 0, and 0 if x < 0 (the support is x >= 0 as you correctly stated).
With such extended "definition" of WeiBullpdf, spdf_fun(s) would return 0 for s < 0.
I avoid all those cumbersome conditions in my code simply by selecting
smin = 1; % >= 0
I did not spend time to read yet your answer carefully. I might do if I have some time.
(PS: It's a rare time where I get so many detailed questions for a code, but the real intention when I wrote this code is that to avoid having too much trouble to explain it.)
Bruno Luong on 11 Jan 2021 (edited 11 Jan 2021)
I enclose here my code with the same input (accepting the normalization of the weights) as Paul, and the conditional distribution of s = sum(X).
Bruno Luong on 11 Jan 2021 (edited 11 Jan 2021)
"Is spdf_fun (after normalizatoin) supposed to be the same as f_y1giveny2y3y4 in my answer?"
I read your code and the answer is no.
The reason is the underlined metric for "acceptance" and the limits when tolerance converges to 0.
  • In your case it's the inf-norm on the transformed variables [Y2,Y3,Y4].
  • In my case it's the 2-norm on X projected on orthogonal of span(C).
Granted the choice to reparametrize the problem is somewhat empirical, and it affects the pdf of the hyper variable (s). In my case I decide to keep the origin variables X along the conditioning set, in you case it's [Y2,Y2,Y4] (I guess if you decide setect another tripplet the result change).
Here is the result (correspond mfile attached) when I plot the histogram using my metric on top of pdf(s).
Conrado Neto on 11 Jan 2021
Ok, I think it all makes sense now, although some steps are a bit too complex for me
for the sake of solving my problem, I only have 2 questions for now:
1) s = interp1(scdf, s(I), rand(), 'pchip')
why do you have a rand(), there?
2) In the end, although you have the pdf of the (conditional) s, you only generate one sample of each Xi. To me it sounds like, after you have the conditional s, you should be able to generate as many samples of each Xi as you need; is that a wrong statement?
Thanks a lot Bruno and Paul for helping me with this problem
Bruno Luong on 11 Jan 2021 (edited 11 Jan 2021)
  1. s = interp1(scdf, s(I), rand(), 'pchip'): because a uniform random number rand() mapped through the inverse cdf produces samples distributed according to the pdf, which is the derivative of the cdf.
  2. If you want to generate a bunch of X for example 1000, then just do
s = interp1(scdf, s(I), rand(1000,1), 'pchip');
X = s * C; % 1000 x 16, each row is a random realization
Bruno Luong on 11 Jan 2021 (edited 11 Jan 2021)
There was a BUG in my script with the histogram (I overwrote s); here is a corrected one:
Gausspdf = @(x,mu,sigma) 1./(sigma*sqrt(2*pi))*exp(-(x-mu).^2./(2*sigma.^2));
WeiBullpdf = @(x,k,lambda) (k./lambda).*(x./lambda).^(k-1).*exp(-(x/lambda).^k);
% pdf parameters
nG = 8;
mu = rand(1,nG); sigma = 1+rand(1,nG);
nW = 8;
k = 1+rand(1,nW); lambda=1+rand(1,nW);
extendfun = @(p,n) p + zeros(1,n);
mu = extendfun(mu,nG);
sigma = extendfun(sigma,nG);
k = extendfun(k,nW);
lambda = extendfun(lambda,nW);
pdftab = [arrayfun(@(mu,sigma) @(x) Gausspdf(x,mu,sigma), mu, sigma, 'Unif', 0) ...
arrayfun(@(k,lambda) @(x) WeiBullpdf(x,k,lambda), k, lambda, 'Unif', 0) ...
];
% Generate a test (known) C table
n = length(pdftab);
C = rand(1,n);
%C = [1 0.3 0.5 0.7];
C = C/sum(C);
% joint probability function
spdf_fun = @(s) prod(arrayfun(@(f,x) feval(f{1},x), pdftab, s*C));
% Discretize s, make sure the range has a significant distribution
% contribution with the plot
% The range depends on the parameters used to build pdftab
smin = 0;
smax = sqrt(n)*8;
ns = 1000;
s = linspace(smin,smax,ns).';
smid = 0.5*(s(1:end-1)+s(2:end));
% plot to see where the distribution is significant
spdf = arrayfun(spdf_fun,smid);
% Normalize, just for the plot
ds = mean(diff(s));
a = 1/(sum(spdf)*ds);
spdf = a*spdf;
plot(smid,spdf);
hold on
PlotHist = n <= 4; % For large n, it converges too slowly
if PlotHist
% Check with histogram
nbins = 32;
m = 1e7; % number of sample
X = zeros(m,n);
sz = [1 m];
for k=1:n
distk = pdftab{k};
info = functions(distk).workspace{1};
funname = func2str(distk);
if contains(funname, 'Gausspdf')
X(:,k) = GaussianRNG(info.mu, info.sigma, sz);
elseif contains(funname, 'WeiBullpdf')
X(:,k) = WeibullRNG(info.k, info.lambda, sz);
end
end
acceptance_ratio = 1e-3;
sumX = sum(X,2); % EDIT bug correction
P = null(C); % orthogonal of span C
XL2Residue = sum((X*P).^2,2);
[~,accepted] = mink(XL2Residue, ceil(m*acceptance_ratio));
histogram(sumX(accepted), 32, 'Normalization', 'pdf'); % EDIT bug correction
end
xlabel('s = \Sigma X_i');
ylabel('pdf(s)');
grid on
% Generate s from the discretized pdf
scdf = [0; cumsum(spdf)];
scdf = scdf / scdf(end);
[scdf,I] = unique(scdf);
s = interp1(scdf, s(I), rand(1000,1), 'pchip');
% Here is X vector
X = s*C
%%
function x=WeibullRNG(k,lambda,sz)
x = lambda.*( -log(1-rand(sz))).^(1/k);
end
%%
function x=GaussianRNG(mu,sigma,sz)
x = mu + sigma*randn(sz);
end
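For readers working outside MATLAB, the whole pipeline in Bruno's script (tabulate the unnormalized density of s along the line X = s*C for the normal/Weibull mix, normalize the cumulative sum, sample s by inverse transform, then set X = s*C) can be sketched in Python. All distribution parameters below are illustrative, not taken from the thread:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical setup: 8 normal and 8 Weibull variables, as in the thread.
nG, nW = 8, 8
mu, sigma = 1.0, 0.5           # shared normal parameters (an assumption)
k, lam = 1.5, 1.0              # shared Weibull shape/scale (an assumption)

C = rng.random(nG + nW)
C = C / C.sum()                # known weights, sum to 1

def norm_pdf(x):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def weibull_pdf(x):
    # Weibull pdf with support x > 0, zero elsewhere.
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    pos = x > 0
    out[pos] = (k / lam) * (x[pos] / lam) ** (k - 1) * np.exp(-(x[pos] / lam) ** k)
    return out

def spdf(s):
    # Unnormalized density of s along the line X = s*C.
    xg = np.outer(s, C[:nG])          # normal coordinates
    xw = np.outer(s, C[nG:])          # Weibull coordinates
    return norm_pdf(xg).prod(axis=1) * weibull_pdf(xw).prod(axis=1)

# Discretize s, build the normalized CDF, sample by inverse transform.
s_edges = np.linspace(1e-6, 40.0, 2001)
s_mid = 0.5 * (s_edges[:-1] + s_edges[1:])
p = spdf(s_mid)
cdf = np.concatenate(([0.0], np.cumsum(p)))
cdf /= cdf[-1]
cdf, idx = np.unique(cdf, return_index=True)   # mirrors MATLAB's unique(scdf)
s_samples = np.interp(rng.random(1000), cdf, s_edges[idx])

X = np.outer(s_samples, C)            # 1000 x 16 realizations
```

As in the MATLAB version, duplicate CDF values are dropped before inverting, and every generated row of X satisfies the ratio constraints exactly.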
Conrado Neto on 12 Jan 2021
Awesome! I think I got it now!
Conrado Neto on 12 Jan 2021
I am trying to accept your answer but I am getting the following message:
An Error Occurred
Unable to complete the action because of changes made to the page. Reload the page to see its updated state.
if I refresh the page (F5), the error persists. I will try again soon!
Conrado Neto on 12 Jan 2021
from checking the code, I believe the following part is not being used, right?
extendfun = @(p,n) p + zeros(1,n);
mu = extendfun(mu,nG);
sigma = extendfun(sigma,nG);
k = extendfun(k,nW);
lambda = extendfun(lambda,nW);
Conrado Neto on 12 Jan 2021
In addition, how does this part of the code work?
PlotHist = n <= 4; % For large n, it converges too slowly
if PlotHist
[...]
end
what is n? If I define n = 1, 2, 3, or 4 I get an error,
and if I don't define n, I get an error too
thanks,
Bruno Luong on 12 Jan 2021 (edited 12 Jan 2021)
The function
extendfun = @(p,n) p + zeros(1,n);
and the subsequent commands are used when mu, sigma, k, or lambda is initialized as a scalar (i.e., if those parameters are common to the Xi distributions). If that's not the case you can remove them.
From my code
n = length(pdftab) % 16
Conrado Neto on 12 Jan 2021
thanks a lot Bruno
I did observe now that your last version of the file gives these results and it looks a bit off to me: results
Bruno Luong on 12 Jan 2021 (edited 12 Jan 2021)
For a large number of variables (n=16), you might need a much larger value of m (samples) and to reduce acceptance_ratio to 1e-4 or 1e-5 or even smaller in order for the histogram to converge properly to the theoretical pdf, since it is very rare to be close to M*X=0.
m = 1e8; % number of samples
...
acceptance_ratio = 1e-5;
Ideally acceptance_ratio should be reduced exponentially with n.
You also need to input a C more coherent with the random laws.
Eventually it converges only when acceptance_ratio -> 0. But for "large" n, it's more challenging to do proper statistics on those rare events.
Here is my test with the above parameters.
Conrado Neto on 12 Jan 2021
I see, I understand!
thanks a lot Bruno, all makes sense now
Paul on 17 Jan 2021
Bruno,
Does this solution extend to the case where we remove the restrictions that Ci > 0 for the ratios involving the normal RVs?
For example, if X1 and X2 are normal and X3 and X4 are Weibull, does the same solution apply if:
C = [ -0.2 0.3 0.4 0.5]
Bruno Luong on 17 Jan 2021
I think so. In the logic we never use the fact that C >= 0.
The parametrization might extend to an unbounded interval. Numerically we just need to be careful and truncate at the places where the pdf is considered negligible.


More Answers (2)

Jeff Miller on 3 Jan 2021 (edited 3 Jan 2021)

1 vote

It sounds like the only "free" aspect of your new samples is the new total (say, 241 instead of 240). Once you have that new total, your 16 individual random scores are in the pre-determined proportions of that new total.
So, generate new random totals in the same way as the original--i.e., as the sum of 16 separate random scores--and then compute the 16 individual new random variables as the pre-determined proportions of the new total.
Note that the 16 random variables generated in this way will no longer be normals and weibulls, however, since they are fixed proportions of a (total) random variable that is neither normal nor weibull.
By the way, you can speed up the generation of the total of 8 independent normal random variables, since their total is a single normal with the sum of the means and the sum of the variances.

12 comments

Conrado Neto on 3 Jan 2021
Thanks a lot for replying to my question; before I accept your answer, do you mind clarifying, please:
so I can generate new totals, and for each total I can calculate the values of each random variable based on their proportions?
isn't it inaccurate, since the totals are generated using values that are not based on those proportions?
this might be an obvious question to people who understand the topic, but I am not an expert, so forgive me if it is a simple question
any further clarification is appreciated!
Well, I don't really understand why you are trying to generate random numbers with these weights, so I don't think I can clarify much.
As I understood your problem, the first number should always be exactly 5/240 of the total, the second should always be exactly 25/240 of the total, and so on. With that constraint, these numbers are highly dependent--for example, the second is always exactly 5 times the first, and so on. In fact, to make a new sample, another way to proceed is to just generate the first one randomly (e.g., from a normal), and then compute each of the others from that. E.g.,
w = [5/240, 25/240, 12/240]; % and so on
x(1) = randn; % with whatever mu and sigma you want
x(2) = w(2)/w(1) * x(1);
x(3) = w(3)/w(1) * x(1);
% etc
In this case the distributions of all x's would be normal, however.
The point is that there really isn't much random here if you regard these weights as fixed. You just can't independently generate separate random values using the original proportions.
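Jeff's point that one random draw determines all the others can be sketched in Python (hypothetical weights and normal parameters); every column comes out as a fixed multiple of x1, hence normal, exactly as he notes:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical weights (sum to 1) echoing the 5/240, 25/240, ... example.
w = np.array([5.0, 25.0, 12.0, 198.0]) / 240.0
mu, sigma = 1.0, 0.5           # illustrative parameters for the first rv

# Generate the first variable randomly; the rest are fixed multiples of it.
x1 = mu + sigma * rng.standard_normal(10_000)
X = np.outer(x1, w / w[0])     # column j equals (w_j / w_0) * x1

# The weight constraints hold for every sample by construction.
ratios = X / X.sum(axis=1, keepdims=True)
```

The trade-off Jeff describes is visible here: the constraints are satisfied exactly, but all columns inherit the (scaled) normal distribution of x1 rather than keeping their own marginal distributions.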
Conrado Neto on 3 Jan 2021
the issue here is that I don't think that, conditional on those proportions, it is accurate to generate samples of the total first, because the total should change when you enforce the proportions condition.
Conrado Neto on 3 Jan 2021
ok, let me try to be clearer
Conrado Neto on 3 Jan 2021
I believe the problem can be solved irrespective of the application.
let's assume I have two random variables, x1 and x2; they follow different distributions and, at first, are independent of each other.
if I were to rephrase my problem, I'd say I want to generate samples of x1+x2 conditional on x1/(x1+x2) always equaling 0.6 (or x2/(x1+x2)=0.4)
if I generate samples of x1 and then use x1/(x1+x2)=0.6 to obtain x2, what happens to the distribution type, mean, and std dev of x2? Doesn't it matter? It should matter, right?
so this problem should be addressed the same way as the problem I proposed originally, I believe
thanks a lot for taking time to think of my problem, I really appreciate it
Conrado Neto on 3 Jan 2021
if I do what you originally proposed: "generate new random totals in the same way as the original--i.e., as the sum of 16 separate random scores--and then compute the 16 individual new random variables as the pre-determined proportions of the new total."
then the distribution of the total sum, considering the proportions, will be equal to the total distribution without the proportions condition, which should not be the case..
thanks for the suggestion, I'd appreciate it if you have any other ideas
Jeff Miller on 3 Jan 2021 (edited 4 Jan 2021)
Maybe it is helpful to look in more detail at the 2-variable case to understand the problem a bit better:
% Just generate some independent random numbers & compute sums.
simN = 500000;
expMu = 1;
expvals = exprnd(expMu,simN,1); % random variable 1 is exponential with mean 1
norMu = 1;
norSigma = 1;
norvals = normrnd(norMu,norSigma,simN,1); % rv 2 is normal(1,1)
sums = expvals + norvals;
% Let's say we are interested in simulating pairs of the RVs
% where the exponential is 0.5 of the total. We can look at such
% pairs "retrospectively" by picking out relevant pairs from the
% independent simulations above.
targetExpFrac = 0.5;
tolerance = 0.01; % because the proportion won't be exactly 1/2
ExpFrac = expvals ./ sums; % compute actual exponential fraction in each pair.
% Now find the pairs that have the right exponential fraction:
accepted = (ExpFrac >= targetExpFrac - tolerance) & (ExpFrac <= targetExpFrac + tolerance);
% Let's look at the distributions of all scores versus the score
% where the proportions are as desired:
figure;
subplot(3,2,1);
histogram(expvals);
title('All exponentials');
subplot(3,2,2);
histogram(expvals(accepted));
title('Exponentials in samples with desired fraction');
subplot(3,2,3);
histogram(norvals);
title('All normals');
subplot(3,2,4);
histogram(norvals(accepted));
title('Normals in samples with desired fraction');
subplot(3,2,5);
histogram(sums);
title('All sums');
subplot(3,2,6);
histogram(sums(accepted));
title('Sums in samples with desired fraction');
% Now, how can we sample randomly from pairs with the right proportions:
acceptedexpvals = expvals(accepted);
acceptednorvals = norvals(accepted);
% sample one position randomly and take the exp & norm from that position.
As you can see, the distribution of the exponentials in the samples with the desired fraction is quite different from the distribution of all exponentials, and likewise for the normals and for the sums. So you are right to worry that the constraint changes the distribution of the totals.
Unfortunately, I don't see any shortcuts. In your case, you would have to generate many millions of sets of 16 variables independently and find the very small fraction of them that happen to have just the proportions you want across all rvs.
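Jeff's acceptance idea scales down to a quick Python check of how rare the qualifying pairs are, using the same exponential/normal setup as his MATLAB demo and a tolerance of 0.01 on the fraction:

```python
import numpy as np

rng = np.random.default_rng(4)

# Two-variable version of the acceptance idea: X1 ~ Exp(1), X2 ~ N(1,1).
n = 500_000
x1 = rng.exponential(1.0, n)
x2 = rng.normal(1.0, 1.0, n)
sums = x1 + x2

# Keep only pairs where X1 is (approximately) half of the total.
frac = x1 / sums
tol = 0.01
accepted = np.abs(frac - 0.5) < tol

accept_rate = accepted.mean()
```

Only a couple of percent of pairs survive even with two variables, and the mean of the accepted sums is visibly shifted away from the unconstrained mean of 2, which illustrates both points in the comment above: the constraint changes the distribution of the totals, and acceptance becomes very expensive as the number of variables grows.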
Conrado Neto on 4 Jan 2021
Exactly, this problem describes what I want!
I am just wondering now about the computational time to get enough samples that have the proportions I want.
you really did understand my problem; I wish there were a computationally efficient way to solve it
thanks a lot
Jeff Miller on 4 Jan 2021
I guess you want to generate these random numbers for the purposes of running additional simulations. If so, you might check the effect of the tolerance size. With larger tolerances, it will be faster to generate enough samples, and you might find that the simulation results don't depend much on the tolerance size.
Conrado Neto on 4 Jan 2021
that is correct, I need those samples for additional simulations, therefore I need many samples
I will investigate whether using this method I can get the results in a reasonable amount of time
I will check the effect of a higher tolerance on the results
thanks a lot
if you happen to think of another solution, please let me know too
Jeff Miller on 4 Jan 2021
I am pretty sure that there is an easier way to do this. The key is that once you know the value of one rv, you can compute the values of the other rvs from the known weights. So, you only need to generate one rv randomly.
Let's say you want to generate random values of the first normal rv, from which you can compute the rest. As you have pointed out, the conditional distribution of this first rv is not actually normal because of the weighting constraint. What is its distribution?
To simplify the terminology, I'll pretend the rvs are all discrete so I can talk about probabilities.
As an example, what is pr(rv1=5), given the constraint? Well, rv1=5 under the constraint only when it is also the case that rv2...rv16 all equal their constrained values consistent with rv1=5. Call those values rv2c, rv3c,....
Now all these rvs are independent so the joint probability is the product of their individual probabilities. Thus, pr(rv1=5|constraint) = pr(rv1=5)*pr(rv2=rv2c)*pr(rv3=rv3c)...
In principle you can do the analogous calculation for all of the different values of rv1, and thus get the distribution of rv1 under the constraint. Once you have that, you can randomly sample rv1's from the constrained distribution and compute the corresponding values of the other rvs from each random rv1.
I must admit that I am not confident about how to extend all this to continuous rvs. Maybe it is enough to compute a joint likelihood by multiplying pdfs and applying some normalization at the end, but maybe it is trickier than that.
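Jeff's product-of-probabilities argument translates directly into code: pin every other rv to its weight-implied multiple of rv1, multiply the individual pdfs, and normalize numerically. A Python sketch with three invented variables (two normal, one Weibull; all weights and parameters hypothetical):

```python
import numpy as np

# Hypothetical weights (sum to 1) for three variables.
w = np.array([0.2, 0.3, 0.5])

def f_norm(x, mu, sigma):
    # Gaussian pdf
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def f_weib(x, k, lam):
    # Weibull pdf, zero for x <= 0
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    pos = x > 0
    out[pos] = (k / lam) * (x[pos] / lam) ** (k - 1) * np.exp(-(x[pos] / lam) ** k)
    return out

def pdf_rv1(v):
    # Under the constraint, rv_i is pinned to (w_i / w_1) * rv1, so the
    # unnormalized constrained density of rv1 is the product of the
    # individual pdfs at those implied values (illustrative parameters).
    return (f_norm(v, 1.0, 0.5)
            * f_norm((w[1] / w[0]) * v, 1.5, 0.5)
            * f_weib((w[2] / w[0]) * v, 1.5, 1.0))

grid = np.linspace(1e-6, 2.0, 2000)
p = pdf_rv1(grid)
dx = grid[1] - grid[0]
p_norm = p / (p.sum() * dx)    # numerical normalization at the end
```

This is the same object Bruno constructs in his answer, just parameterized by rv1 instead of the sum s; the normalization "at the end" that Jeff is unsure about is exactly the numerical step shown on the last line.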
Conrado Neto on 6 Jan 2021
Indeed, I agree that knowing the "conditional distribution" of a single rv would solve all my problems.
and I also don't know how to obtain that; I was hoping someone might know
in any case, you are right
if you happen to have any idea, let me know
I did some searching on this and the topic started to get really complicated, so I wasn't able to fully get it


Paul on 8 Jan 2021 (edited 13 Jan 2021)
Based on this comment, the problem is as follows:
Let Xi be a set of independent, continuous random variables, i = 1..n.
Let wi be a set of weights, i = 2..n. The goal is to find samples of Xi under the constraints Xi/sum(Xi) = wi, i = 2..n.
To solve this problem:
1. transform to a different set of random variables defined as follows: Y1 = sum(Xi), Yi = Xi/sum(Xi), i = 2..n
2. find the joint pdf of Yi
3. find the conditional pdf of Y1 given Yi = wi, i = 2..n
4. sample Y1 from its conditional pdf
5. generate samples of Xi from Y1 and the constraint equations.
For example assume:
X1 ~ N(0,1)
X2 ~ N(1,2)
X3 ~ W(1,1)
X4 ~ W(2,1)
% Step 0
% define the distribution parameters
mu1 = 0; sigma1 = 1;
mu2 = 1; sigma2 = 2;
a3 = 1; b3 = 1;
a4 = 1; b4 = 2;
% define the pdfs of the random variables
syms x1 x2 real
syms x3 x4 real
f_x1(x1) = 1/sigma1/sqrt(2*pi)*exp(-(x1-mu1)^2/(2*sigma1^2)); % -inf < x1 < inf
f_x2(x2) = 1/sigma2/sqrt(2*pi)*exp(-(x2-mu2)^2/(2*sigma2^2)); % -inf < x2 < inf
f_x3(x3) = piecewise(x3 <= 0, 0, x3 > 0, (b3/a3)*((x3/a3)^(b3-1))*exp(-((x3/a3)^b3))); % -inf < x3 < inf
f_x4(x4) = piecewise(x4 <= 0, 0, x4 > 0, (b4/a4)*((x4/a4)^(b4-1))*exp(-((x4/a4)^b4))); % -inf < x4 < inf
% define the joint pdf of Xi
f_x1x2x3x4(x1,x2,x3,x4) = f_x1(x1)*f_x2(x2)*f_x3(x3)*f_x4(x4);
% Step 1
% define the transformation of RVs
syms y1 y2 y3 y4 real
g1 = y1 == x1 + x2 + x3 + x4;
g2 = y2 == x2/(x1 + x2 + x3 + x4);
g3 = y3 == x3/(x1 + x2 + x3 + x4);
g4 = y4 == x4/(x1 + x2 + x3 + x4);
% Step 2
% solve for the inverse equations
h = solve([g1,g2,g3,g4],x1,x2,x3,x4,'Real',true,'ReturnConditions',true);
% find the Jacobian of the inverse solution
J(y1,y2,y3,y4) = jacobian([h.x1, h.x2, h.x3, h.x4],[y1, y2, y3, y4]);
% determinant of J
detJ(y1,y2,y3,y4) = det(J(y1,y2,y3,y4));
% joint density of Yi (edit 12 Jan 2021: math should use abs(detJ); did not change results)
f_y1y2y3y4(y1,y2,y3,y4) = f_x1x2x3x4(h.x1,h.x2,h.x3,h.x4)*abs(detJ(y1,y2,y3,y4));
% Step 3
% The conditional density of Y1 given Yi = yi is the joint density divided by the marginal density
% marginal density of Yi, i = 2-4
f_y2y3y4(y2,y3,y4) = int(f_y1y2y3y4(y1,y2,y3,y4),y1,-inf,inf);
% conditional density of Y1
f_y1giveny2y3y4(y1,y2,y3,y4) = f_y1y2y3y4(y1,y2,y3,y4)/f_y2y3y4(y2,y3,y4);
% verify conditional density using Jeff Miller's acceptance approach (https://www.mathworks.com/matlabcentral/answers/707693-generation-of-conditional-random-variables#comment_1242578)
simN = 5000000;
X1 = random('Normal',mu1,sigma1,[simN 1]);
X2 = random('Normal',mu2,sigma2,[simN 1]);
X3 = random('Weibull',a3,b3,[simN 1]);
X4 = random('Weibull',a4,b4,[simN 1]);
w2 = 0.3;
w3 = 0.5;
w4 = 0.7;
Y1 = X1 + X2 + X3 + X4;
Y2 = X2./(X1 + X2 + X3 + X4);
Y3 = X3./(X1 + X2 + X3 + X4);
Y4 = X4./(X1 + X2 + X3 + X4);
tolerance = 0.1;
accepted = abs(Y2 - w2) < tolerance & abs(Y3 - w3) < tolerance & abs(Y4 - w4) < tolerance;
figure;
subplot(311);
histogram(Y1(accepted),'Normalization','pdf');
hold on
fplot(f_y1giveny2y3y4(y1,w2,w3,w4),[0 5],'r','LineWidth',4)
title('f(y1 | y2,y3,y4), Analytic and Histogram (Acceptance)')
% Step 4
% sample Y1 using inverse sampling (numerical solution)
syms z real
f(z) = vpa(f_y1giveny2y3y4(z,w2,w3,w4));
Cdf_y1(y1) = int(f(z),-inf,y1);
y1vec = 0:.001:5; % Cdf_y1(0) = 0, Cdf_y1(5) = 0.999998034
Cvec = double(Cdf_y1(y1vec));
usamp = rand(simN,1);
Y1samp = interp1(Cvec,y1vec,usamp);
Y2samp = w2;
Y3samp = w3;
Y4samp = w4;
subplot(312)
histogram(Y1samp,'Normalization','pdf');
hold on
fplot(f_y1giveny2y3y4(y1,w2,w3,w4),[0 5],'r','LineWidth',4)
title('f(y1 | y2,y3,y4), Analytic and Histogram (Direct Inverse Sampling)')
% Step 5
% generate samples of Xi from Yi and the constraint equations
h1 = matlabFunction(h.x1);
X1samp = h1(Y1samp,Y2samp,Y3samp,Y4samp);
X2samp = w2*Y1samp;
X3samp = w3*Y1samp;
X4samp = w4*Y1samp;
subplot(313)
histogram(X1samp,'Normalization','pdf');
title('Conditional PDF of X1');
Here are the results. It seemed to work pretty well for this example. Note that Step 5 runs very fast once Steps 1-4 complete.
I don't know how well this approach scales as the number of RVs increases. There may be other gotchas as well.

18 comments

Conrado Neto
Conrado Neto on 8 Jan 2021
Hi Paul,
Amazing answer, that seems to be exactly what I want to do.
I am going to play with your code a bit, and then I will have to change my accepted answer to yours, because it seems you nailed it.
In my case there are 16 RVs in total. I am also unsure whether there would be other gotchas because of that, but I don't think so.
I have a couple of other questions; the first one is very basic:
In my case I want the sum of the weights to be 1, so I am not sure why you wrote:
g1 = y1 == x1 + x2 + x3 + x4;
I believe it should be
g1 = y1 == x1/(x1 + x2 + x3 + x4);
like the other RVs, unless there is some basic logical step there that I am missing.
In any case, the formulation of the problem shouldn't change much if that is different.
Now the second question, since you clearly understand my problem:
In the end, I just need to generate samples of sum(x1+x2+...+x16) respecting x1/(x1+..+x16) + x2/(x1+..+x16) + ... + x16/(x1+..+x16) = 1.
Does this simplify the problem, or is it still best to generate the variables for each RV and then sum them?
Thanks a lot, you have already helped me a lot with your answer.
Paul
Paul on 8 Jan 2021
I must not have understood the problem, insofar as the problem has nothing to do with known weights at all, as I had in my answer with w2-w4 (and Jeff Miller had in his answer as well with targetExpFrac).
But the new problem you pose is problematic because
X1/(X1+ ... +X16) + X2/(X1 + ... + X16) + ... + X16/(X1 + ... + X16) = 1
is satisfied for any values of X1 ... X16, isn't it?
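(A quick numeric check, sketched here in Python/NumPy purely for illustration, confirms Paul's point: the ratios sum to 1 for any values of the variables whatsoever, so by itself that equation constrains nothing.)

```python
import numpy as np

# Draw arbitrary values for 16 "random variables"; the distributions don't matter here.
# (Mean 1 keeps the row sums comfortably away from zero.)
rng = np.random.default_rng(1)
X = rng.normal(loc=1.0, size=(5, 16))

# Each row's ratios Xi/sum(Xi) always sum to sum(Xi)/sum(Xi) = 1
ratios = X / X.sum(axis=1, keepdims=True)
print(np.allclose(ratios.sum(axis=1), 1.0))
```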
Conrado Neto
Conrado Neto on 9 Jan 2021
Exactly, I need to generate values of X1,...,X16 that always satisfy X1/(X1+ ... +X16) + X2/(X1 + ... + X16) + ... + X16/(X1 + ... + X16) = 1,
because, in the end, I need values of X1+...+X16.
Conrado Neto
Conrado Neto on 9 Jan 2021
I don't understand why I can't just modify your code so that
g1 = y1 == x1/(x1 + x2 + x3 + x4);
Which part would not work?
Thanks a lot for your help.
Paul
Paul on 9 Jan 2021
Any values of X1 - X16 (determined in any way that you can imagine) will always satisfy that equation, so I don't see how that equation can influence anything.
So it's still not clear to me exactly what the problem is that needs to be solved, and so I can't comment on the proposal to change the definition of g1. All I know for sure is that there are 16 independent RVs: eight are normal and eight are Weibull. Other than that, it's not clear at all what constraints are to be imposed on those RVs. What happened to the "weights"? For example, referring back to your comment here, where does the 0.6 come from? Is that specified as part of the problem statement?
Conrado Neto
Conrado Neto on 9 Jan 2021
Oh right, I am sorry for not mentioning that important aspect of my problem.
In my problem I call these ratios the weights:
X1/(X1+ ... +X16), X2/(X1 + ... + X16), ..., X16/(X1 + ... + X16)
and there is the simplification, in my problem, that they are known prior to the generation of the random variables, for example:
X1/(X1+ ... +X16) = 0.1, X2/(X1 + ... + X16) = 0.05, ..., X16/(X1 + ... + X16) = 0.01,
and the sum of these weights is always 1.
So now I guess it makes more sense that your previously proposed code DOES WORK to solve my problem, doesn't it?
Let me know if that's clear.
Thanks once again.
Paul
Paul on 9 Jan 2021
Are there any other constraints on the weights besides that they sum to 1? Are they all strictly positive?
Conrado Neto
Conrado Neto on 9 Jan 2021
Only that constraint, and those are all known prior to the generation of the random variables.
So essentially my problem is: I want to generate samples of (X1+...+X16), considering that X1,...,X16 are random variables (1 to 8 are normal and 9 to 16 are Weibull), respecting the constraints:
X1/(X1+ ... +X16) = C1
...
X16/(X1 + ... + X16) = C16
where C1,...,C16 are NOT random variables and are known prior to the generation of the RVs,
and C1+...+C16 is always 1 (and C1,...,C16 are always positive numbers).
Conrado Neto
Conrado Neto on 9 Jan 2021
I am trying to understand why the condition on x1/(x1+x2+x3+x4) was not included in the original solution you provided; I assume it was because that was what Jeff's comment said.
Conrado Neto
Conrado Neto on 9 Jan 2021
I am trying to modify your code to include the condition x1/(x1+x2+x3+x4) = C1, but I have had no success in doing so.
Paul
Paul on 10 Jan 2021
Edited: Paul on 10 Jan 2021
I'm not yet convinced that the problem is well-posed. What we have are RVs Xi. As I showed above, the joint density of the Xi exists. It's just the product of the individual densities, due to the independence of the Xi. Then we have RVs Yi defined as:
Yi = Xi/sum(Xi)
and finally we have an RV defined as:
S = sum(Xi)
What you're looking to do is generate samples of S conditioned on Yi taking on specified values. Those samples would come from the conditional density of S, i.e., the density of S conditioned on Yi = Ci.
What is that conditional density of S?
If it exists, it would be the joint density of S and Yi divided by the marginal density of Yi. So to solve the problem we need to find these densities.
What's the marginal density of Yi? If it exists, it's just the joint density of Yi as derived from the joint density of Xi. At first glance, one might think to use the change-of-variables approach, as done above, to establish the joint density of Yi. But there's a catch. I'm nearly certain that a necessary condition for the joint density of Yi to exist is that the mapping from the Xi to the Yi be one-to-one. In this situation that's not the case: for example, Yi = 1/n whenever all the Xi take a common value x, regardless of x. So I don't think the joint density of Yi is defined.
More problems ensue in trying to find the joint density of S and Yi.
So if (and I'm not sure it's true) these joint densities do not exist, is the problem well-posed?
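(Side note, sketched in Python/NumPy for illustration only: the non-one-to-one mapping described above is easy to see numerically. Scaling all the Xi by any positive factor t leaves every ratio Xi/sum(Xi) unchanged, so an entire ray in X space maps to a single point in ratio space.)

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
base = x / x.sum()

# The same ratio vector comes back for every positive scale factor t,
# so infinitely many x vectors map to the same ratios
for t in (0.5, 1.0, 7.3):
    xt = t * x
    assert np.allclose(xt / xt.sum(), base)
print(base)
```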
Conrado Neto
Conrado Neto on 10 Jan 2021
I am confused when you talk about the statistics of Yi since, in my problem, as you know, Yi = Ci, a constant value. So when you say "marginal density of Yi", considering that Yi is constant, I don't understand it.
Also, I am trying to understand why the problem you solved has no x1/(x1+x2+x3+x4) condition, and then how your problem differs from my problem. Is it only because that condition is not considered? Why didn't you consider it in the first place?
I'm also confused as to which problem you solved in that example and how it differs from my problem.
I'd appreciate any clarification.
Thanks.
Paul
Paul on 10 Jan 2021
A. Before talking about Yi as a constant, I'm viewing Yi as RVs, because they are defined as functions of the Xi. In other words, I'm viewing the problem in two steps: first define the RVs Yi, and then apply the conditions.
B. In my problem, I did not include Y1 = X1/(X1 + X2 + X3 + X4) as an RV because I misunderstood the problem. I thought the problem only had n-1 constraints when, in fact, it has n constraints.
C. The difference between the problem I solved and the real problem is that the problem I solved has only n-1 constraints; the real problem has n. Again, that was my misunderstanding of what you were asking.
I suspect the (or at least one) reason you're having trouble adapting my solution to your problem is related to the details of the mapping between the Xi and the Yi.
I have one other concern. Suppose the Yi = Xi/sum(Xi) are, in fact, continuous random variables (it sure seems like they are; can Yi take on an uncountable number of values before applying any constraints?). Then the probability that Yi = Ci for i = 1,n is zero, by the definition of a continuous random variable.
Full disclosure: I think there are some subtleties in this problem that are getting close to or beyond my level of knowledge in this subject.
Paul
Paul on 10 Jan 2021
Well, this is embarrassing. After thinking more about the problem, I realized that the solution I posted, without modification, might solve your problem (at least in theory). If you run the code as is, you'll see that X1samp./Y1 == 1 - (w2 + w3 + w4) and Xisamp./Y1 = wi, so I think Y1samp are the samples you seek. However, I'm not sure. I implemented my solution and Bruno's solution for the same parameters and did not get the same results. In both cases the samples of Xi satisfied the constraints, but the Xi had different distributions between my solution and Bruno's. Could be user error on my part.
Jeff Miller
Jeff Miller on 10 Jan 2021
FWIW, I implemented Paul's solution for a few further example distributions and found very good agreement between his Y1samp distribution and a subset of random samples selected as having the right weights (at least when tolerance was small).
Full disclosure: the subtleties in this problem passed my level of knowledge some time ago.
Bruno Luong
Bruno Luong on 11 Jan 2021
@Paul: why do the weights w1, w2, w3, w4 in your example not sum to 1? Does it matter for the result?
Paul
Paul on 12 Jan 2021
Edited: Paul on 3 Feb 2021
They did not sum to 1 because, when I wrote the answer, I did not realize that was a constraint. However, I've changed the values so that they do sum to 1, and it didn't change the result.
Edit: Actually, they did sum to 1, with w1 implicitly being set to -0.5, because we must have sum(w) = 1.
Paul
Paul on 3 Feb 2021
Edited: Paul on 3 Feb 2021
Here is updated code that is simpler and more flexible.
mu = [0 1]; sigma = [1 2];
a = [1 1]; b = [1 2];
% define the constants.
C = [ 0.1; 0.2; 0.3; 0.4]; % column vector
nG = numel(mu);
nW = numel(a);
C = C(:)./sum(C);
%Ccell = num2cell(C);
% number of samples
simN = 5000000;
xvar = sym('x',[1 nG+nW],'real');
yvar = sym('y',[1 nG+nW],'real');
syms pdf_normal(x,muparam,sigmaparam)
pdf_normal(x,muparam,sigmaparam)= 1/sigmaparam/sqrt(2*pi)*exp(-(x-muparam)^2/(2*sigmaparam^2)); % -inf < x1 < inf
syms pdf_weibull(x,aparam,bparam)
pdf_weibull(x,aparam,bparam) = piecewise(x < 0, 0, x >= 0, (bparam/aparam)*((x/aparam)^(bparam-1))*exp(-((x/aparam)^bparam)));
% define the joint pdf of Xi
syms jointpdf_X real
jointpdf_X = 1;
for ii = 1:nG
    jointpdf_X = jointpdf_X*pdf_normal(xvar(ii),mu(ii),sigma(ii));
end
for ii = 1:nW
    jointpdf_X = jointpdf_X*pdf_weibull(xvar(ii+nG),a(ii),b(ii));
end
jointpdf_X = piecewise(sum(xvar)==0,0,jointpdf_X); % need to exclude the line sum(xi) = 0
jointpdf_X(xvar) = jointpdf_X;
% determinant of Jacobian
detJac(yvar) = yvar(1)^(nG+nW-1);
% joint density of Y, evaluated at Xi = Y1*Ci
Xi = num2cell(yvar(1)*C);
jointpdf_Y(yvar(1)) = simplify(jointpdf_X(Xi{:})*abs(detJac));
jointpdf_Y(yvar(1)) = vpa(jointpdf_Y);
% marginal density of Y2-Yn, evaluated at Xi = Y1*Ci, i = 2-n
marginalpdf_Y = vpaintegral(jointpdf_Y(yvar(1)),yvar(1),-inf,inf);
% conditional density of Y1 (only valid when the marginal density ~= 0)
conditionalpdf_Y1(yvar(1)) = jointpdf_Y/marginalpdf_Y;
conditionalpdf_Y1 = vpa(conditionalpdf_Y1);
% Step 4
% sample Y1 using inverse sampling (numerical solution)
syms z real
f(z) = vpa(conditionalpdf_Y1(z));
Cdf_y1(yvar(1)) = int(f(z),-inf,yvar(1));
% rough estimate of the maximum practical value of S = sum(Xi) (assumes S>0)
maxY1 = 0;
for ii = 1:nG
    maxY1 = maxY1 + (mu(ii)+3*sigma(ii));
end
for ii = 1:nW
    maxY1 = maxY1 + icdf('Weibull',0.999,a(ii),b(ii));
end
y1vec = 0:.01:maxY1;
% Cdfvec = double(Cdf_y1(y1vec));
Cdfvec = cumtrapz(y1vec,double(conditionalpdf_Y1(y1vec)));
[Cdfvec,ii] = unique(Cdfvec);
y1vec = y1vec(ii);
if abs(1-Cdfvec(end)) > 1e-5
    error('CDF(end) ~= 1');
end
Y1 = interp1(Cdfvec,y1vec,rand(simN,1));
This code yields the same results as in the original answer. It works for the original question as defined with Ci > 0, and it also works for other admissible values of Ci (i.e., sum(Ci) = 1 and the Ci associated with the Weibull variables all having the same sign), as long as the determination of maxY1 is properly modified for the case where Y1 = sum(Xi) is negative. But the important term is conditionalpdf_Y1, which I believe comes out correctly for the case where Y1 < 0 (admittedly I have not tested this case extensively, but it seemed to work like it should).
It turns out that this solution is very similar to Bruno's solution in this comment. In fact, this term
jointpdf_X(Xi{:})
is exactly the same as this term
spdf_fun = @(s) prod(arrayfun(@(f,x) feval(f{1},x), pdftab, s*C));
in Bruno's code. The difference between the solutions is that this solution multiplies jointpdf_X by the term
abs(detJac) % == abs(y1^(nG+nW-1))
This extra term comes from the transformation of the differential volume in X space to the differential volume in Y space, which is needed to define the joint pdf in Y space, which must be in hand before applying the conditioning to get the conditional pdf. At least that is my understanding of these types of problems.
This solution can provide some insight into the symbolic form of the conditional pdf. If only a numerical answer is desired, it's probably easier to avoid the symbolic computations and use Bruno's code with one line modified:
% spdf_fun = @(s) prod(arrayfun(@(f,x) feval(f{1},x), pdftab, s*C));
spdf_fun = @(s) prod(arrayfun(@(f,x) feval(f{1},x), pdftab, s*C)).*abs(s.^(nG+nW-1));
with the caveat that, as Bruno noted, Bruno's code will need minor updates to generalize to cases where one or more Ci < 0 is allowed, should such generalization be desired.
The conditional pdf in this solution will be proportional to
y1^(nG+nW-1+sum(b~=1))
which can be a pretty large exponent. For the original question it will be in the range [15 23], depending on the b parameters of the Weibull variables. So there may be some computational issues associated with raising numbers to a large power in intermediate computations, depending on the RV parameters, if using only the numerical solution.
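(For readers who want only the numerical recipe, the whole approach collapses to: the conditional density of S = sum(Xi), given Xi/S = Ci, is proportional to prod_i f_i(Ci*s) * |s|^(n-1); normalize that on a grid, sample S by inverse-CDF lookup, and set Xi = Ci*S. Below is a hedged Python/SciPy sketch of that idea, with parameters chosen arbitrarily to mirror Paul's 2-normal + 2-Weibull example, not the poster's actual data.)

```python
import numpy as np
from scipy import stats

# Illustrative parameters only: 2 normal + 2 Weibull RVs, weights Ci summing to 1
C = np.array([0.1, 0.2, 0.3, 0.4])
pdfs = [
    stats.norm(loc=0, scale=1).pdf,
    stats.norm(loc=1, scale=2).pdf,
    stats.weibull_min(c=1, scale=1).pdf,  # shape 1, scale 1
    stats.weibull_min(c=2, scale=1).pdf,  # shape 2, scale 1
]
n = len(C)

# Unnormalized conditional density of S: prod_i f_i(Ci*s) times the Jacobian factor |s|^(n-1)
def f_unnorm(s):
    s = np.asarray(s, dtype=float)
    vals = np.abs(s) ** (n - 1)
    for Ci, f in zip(C, pdfs):
        vals = vals * f(Ci * s)
    return vals

# Since all Ci > 0 and the Weibulls require Ci*s > 0, S lives on s > 0 here
s_grid = np.linspace(1e-6, 20.0, 20001)
cdf = np.cumsum(f_unnorm(s_grid))
cdf /= cdf[-1]

# Inverse-CDF sampling of S, then scale by the weights to recover the Xi
rng = np.random.default_rng(0)
S = np.interp(rng.random(10000), cdf, s_grid)
X = np.outer(S, C)  # every row satisfies Xi/sum(Xi) = Ci by construction
```

Here the grid upper limit (20) plays the role of Paul's maxY1 estimate; in practice it should be chosen from the tails of the component distributions.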

Sign in to comment.

Asked on 3 Jan 2021
Edited on 3 Feb 2021
