You are correct that the False Discovery Rate is derived from the False Positives (FDR = FP / (FP + TP)). However, I'm not sure which metric you mean by True Negatives: for a multiclass problem, do you mean that every example that is neither labelled as class 'A' nor predicted as class 'A' would count as a TN for class 'A' (and similarly for every other class in your dataset)?
If so, you can calculate the TNs for a class by subtracting all of its TPs, FPs, and FNs from the total number of examples, since TP + FP + TN + FN = total number of examples.
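To make that arithmetic concrete, here is a minimal Python sketch (the 3-class confusion matrix is made up for illustration; rows are true classes, columns are predicted classes):

```python
import numpy as np

# Hypothetical 3-class confusion matrix:
# rows = true class, columns = predicted class.
cm = np.array([[5, 1, 0],
               [2, 6, 1],
               [0, 1, 4]])

total = cm.sum()            # total number of examples
tp = np.diag(cm)            # correctly predicted, per class
fp = cm.sum(axis=0) - tp    # predicted as the class but labelled otherwise
fn = cm.sum(axis=1) - tp    # labelled the class but predicted otherwise
tn = total - tp - fp - fn   # everything else: TP + FP + TN + FN = total

print(tn)  # per-class true negatives: [12  9 14]
```

The same subtraction works in MATLAB on the array returned by 'confusionmat', using sum and diag in place of the NumPy calls.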
Alternatively, you can export the model trained in the app to your workspace and get predictions on your test data. You can then use 'confusionmat' to obtain the confusion matrix as a MATLAB array, from which you can calculate each of these four metrics manually.
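'confusionmat' tallies an n-by-n matrix with rows as true classes and columns as predicted classes. A short Python sketch of the same tally (the labels and predictions here are made up for illustration):

```python
# Hypothetical ground-truth labels and model predictions.
y_true = ['A', 'A', 'B', 'B', 'B', 'C']
y_pred = ['A', 'B', 'B', 'B', 'C', 'C']

labels = ['A', 'B', 'C']
idx = {c: i for i, c in enumerate(labels)}

# cm[i][j] counts examples of true class i predicted as class j,
# mirroring what confusionmat returns in MATLAB.
cm = [[0] * len(labels) for _ in labels]
for t, p in zip(y_true, y_pred):
    cm[idx[t]][idx[p]] += 1

print(cm)  # [[1, 1, 0], [0, 2, 1], [0, 0, 1]]
```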
Hope it helps!