This work proposes a method for solving multiobjective optimization problems whose objectives are given as expectations of random functions. The method extends the classical stochastic gradient descent algorithm to the multiobjective case and, unlike classical approaches, does not require the costly estimation of the expectations by sample average approximation. The Stochastic Multiple Gradient Descent Algorithm (SMGDA) proposed here relies on the existence of a descent direction common to all the objectives. A recursive sequence is built in which, at each step, the common descent vector is computed from the updated objective function gradients. The step size is governed by a sequence with the same properties as in the classical stochastic gradient algorithm. Under a set of assumptions, both mean-square and almost-sure convergence are proven. The method is illustrated on several benchmark examples and compared to a widely used genetic algorithm (NSGA-II) coupled with a Monte Carlo estimator. The use of subgradients enables the algorithm to converge towards the Pareto solutions even when the objectives are not differentiable. A final engineering application is presented, in which a sandwich material is optimised under non-differentiable objectives and geometrical constraints.
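The recursion described above can be sketched for two objectives: at each step, noisy gradients of both objectives are sampled, the common descent vector is taken as the minimum-norm element of their convex hull (for which a closed form exists when there are two gradients), and a Robbins-Monro step size is applied. This is a minimal illustrative sketch on a toy bi-objective quadratic problem, not the paper's implementation; all function names, parameters, and the test problem are assumptions made for the example.

```python
import numpy as np

def common_descent_direction(g1, g2):
    """Minimum-norm element of the convex hull of two gradients
    (closed form for the two-objective case)."""
    diff = g2 - g1
    denom = diff @ diff
    if denom < 1e-12:          # gradients coincide
        return g1.copy()
    t = np.clip((g2 @ diff) / denom, 0.0, 1.0)  # weight on g1
    return t * g1 + (1.0 - t) * g2

def smgda(x0, stoch_grads, n_iter=5000, a=0.5):
    """Sketch of the SMGDA recursion: sample noisy gradients of both
    objectives, form the common descent vector, and take a step
    s_k = a / (k + 1), i.e. step sizes whose sum diverges while the
    sum of squares converges, as in classical stochastic gradient."""
    x = np.asarray(x0, dtype=float).copy()
    for k in range(n_iter):
        g1, g2 = stoch_grads(x)
        x -= a / (k + 1.0) * common_descent_direction(g1, g2)
    return x

# Toy problem (assumed for illustration): f_i(x) = E||x - c_i + xi||^2
# with zero-mean noise xi; the Pareto set is the segment [c1, c2].
rng = np.random.default_rng(0)
c1, c2 = np.array([0.0, 0.0]), np.array([1.0, 0.0])

def noisy_grads(x):
    # Unbiased stochastic gradients of the two expected objectives.
    return (2.0 * (x - c1 + 0.05 * rng.standard_normal(2)),
            2.0 * (x - c2 + 0.05 * rng.standard_normal(2)))

x_final = smgda([2.0, 2.0], noisy_grads)
```

On this toy problem the iterates settle near the Pareto segment between `c1` and `c2`; note that only one noisy gradient sample per objective is used at each step, with no inner sample-average loop, which is the computational advantage claimed over Monte-Carlo-based approaches.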