There are multiple ways to do it. The first and easiest way is to shift to a deep neural network framework rather than using a shallow neural network; it gives you more flexibility to write custom functions. Check out this example to create a deep network. Now, coming to the custom loss function: you can either define a custom output layer that includes the Huber loss (check out this example, it will help you), or, if you don't wish to create a separate layer, you can write a custom training loop and implement the loss function on your own (check this documentation).
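For the custom-training-loop route, the loss computation might look something like this minimal sketch. It assumes a `dlnetwork` called `net`, `dlarray` inputs `X` and targets `T`, and a recent Deep Learning Toolbox release (which ships a built-in `huber` function); `modelLoss` itself is just a hypothetical helper name:

```matlab
% Sketch: model loss function for a custom training loop with Huber loss.
% Assumes net is a dlnetwork and X, T are dlarray input/target batches.
function [loss, gradients] = modelLoss(net, X, T)
    Y = forward(net, X);                           % forward pass through the network
    loss = huber(Y, T);                            % built-in Huber loss (recent toolboxes)
    gradients = dlgradient(loss, net.Learnables);  % autodiff w.r.t. learnable parameters
end
```

You would call this inside the training loop via `dlfeval(@modelLoss, net, X, T)` so that automatic differentiation is enabled.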
If you still want to use a shallow neural network, i.e. fitnet, then implement the following steps.
net = fitnet([5 5], 'trainscg');
net.performFcn = 'newfcn';   % must match the package name below
net = train(net, x, t);
Now you have to create a file newfcn.m together with a new package "+newfcn", using the following template.
1) newfcn.m - Same as mse.m
2) +newfcn/apply.m - The main performance calculation
- function perfs = apply(t,y,e,param)
- Calculate performance for each target individually so 'perfs' is same size as t, y and e.
3) +newfcn/backprop.m - Backpropagation derivatives
- function dy = backprop(t,y,e,param)
- Return 'dperf/dy', the derivative of performance with respect to each output 'y'.
4) +newfcn/forwardprop.m - Forward propagate derivatives
- function dperf = forwardprop(dy,t,y,e,param)
- Return 'dperf/dwb', given 'dy/dwb'. (wb - weights and biases)
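As a concrete illustration of the template above, the Huber-loss versions of these files might look something like the sketch below. This is an assumption-laden sketch, not a definitive implementation: the transition point `delta = 1` is my choice, and I follow the mse.m convention that `e = t - y` (so `dperf/dy = -dperf/de`):

```matlab
% +newfcn/apply.m -- sketch of per-element Huber performance
function perfs = apply(t, y, e, param)
    delta = 1;                           % transition point (assumption)
    absE  = abs(e);
    perfs = 0.5 * e.^2;                  % quadratic region, |e| <= delta
    idx   = absE > delta;
    perfs(idx) = delta * (absE(idx) - 0.5*delta);  % linear region
end

% +newfcn/backprop.m -- sketch of dperf/dy
function dy = backprop(t, y, e, param)
    delta = 1;                           % must match apply.m
    dy  = -e;                            % d(0.5 e^2)/dy, since e = t - y
    idx = abs(e) > delta;
    dy(idx) = -delta * sign(e(idx));     % constant-slope gradient in linear region
end

% +newfcn/forwardprop.m -- sketch: chain rule dperf/dwb = (dperf/dy) .* (dy/dwb)
function dperf = forwardprop(dy, t, y, e, param)
    delta = 1;
    g   = -e;
    idx = abs(e) > delta;
    g(idx) = -delta * sign(e(idx));
    dperf = bsxfun(@times, dy, g);
end
```

The key point is that backprop.m and forwardprop.m must return derivatives consistent with apply.m, otherwise training will silently misbehave.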
In this way you have to define everything with respect to the custom function. Both methods I suggested are similar, but I would prefer the first one because it is easier to implement and you will find plenty of resources to explore. The second, shallow-network approach is slightly nontrivial and there aren't many resources available. So try shifting to the first approach.
I hope this helps.