The simplest and most efficient approach would be to compute samples r_i = r(i*T), where T is the sampling period, and then use conv to approximate the convolution integral (the output of conv, scaled by T, is a Riemann-sum approximation of the integral). Make sure to take enough samples r_i that r(t) is equal to, or well approximated by, zero outside the interval covered by the samples. This approach is simpler if r(t) = 0 for t < 0, but it can still be implemented even if r(t) ~= 0 for t < 0.
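Here is a minimal sketch of that approach, assuming r(t) can be evaluated from a function handle and that q is already a vector of samples taken with period T (the specific r_handle, time span, and q used below are only placeholders for your data):

% Minimal sketch, assuming q is a vector of samples q(i*T) and that
% r(t) can be evaluated from a handle (both are placeholders here).
T = 0.01;                          % sampling period (assumed)
t_r = 0:T:5;                       % cover enough time that r(t) is ~0 beyond it (assumed)
r_handle = @(t) exp(-2*t);         % example r(t); replace with your own
r_i = r_handle(t_r);               % samples r_i = r(i*T)
q = randn(1, 200);                 % placeholder for your q samples

% Riemann-sum approximation of the convolution integral:
% y(n*T) ~ T * sum_k q(k*T) * r((n-k)*T)
y = T * conv(q, r_i);
t_y = (0:numel(y)-1) * T;          % time vector for the result
plot(t_y, y)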
If q and/or r have lots of samples, then consider using cconv, which uses an FFT-based approach that might be faster. If you're willing to assume that q(t) is linear between its samples, then you can use interp1 to interpolate q to an effectively higher sampling rate (smaller sampling period) before sampling r(t) on that finer grid and doing the convolution; see the sketch below.

If you'd like, feel free to post an example and your code. The data in q can be saved in a .mat file and uploaded using the Paperclip icon on the Insert menu, unless you can define q by an equation as an example. If you have R(s) as a sym, then you can include that in the .mat file as well. Otherwise, a picture (use the Image icon on the Insert menu) should be fine if R(s) isn't too complicated.
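For reference, here is a rough sketch of those two refinements, reusing q, r_handle, and T from the sketch above (the upsampling factor and the full-length argument passed to cconv are assumptions on my part, not something taken from your data):

% FFT-based linear convolution via cconv: ask for the full linear-convolution length.
N = numel(q) + numel(r_i) - 1;
y_fft = T * cconv(q, r_i, N);      % should match T*conv(q, r_i) to within round-off

% Upsample q with interp1 (assumes q(t) is linear between samples),
% then re-sample r(t) on the same finer grid before convolving.
up = 10;                           % upsampling factor (assumed)
Tf = T / up;                       % finer sampling period
t_q  = (0:numel(q)-1) * T;
t_qf = 0:Tf:t_q(end);
q_f  = interp1(t_q, q, t_qf, 'linear');
r_f  = r_handle(0:Tf:5);           % r(t) sampled on the finer grid
y_f  = Tf * cconv(q_f, r_f, numel(q_f) + numel(r_f) - 1);   % result sampled at period Tf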