In this paper, we study the following nonlinear backward stochastic integro-partial differential equation with jumps:
\begin{equation*}
\left\{
\begin{split}
-dV(t,x) ={}& \inf_{u\in U}\bigg\{ H\Big(t,x,u,\,DV(t,x),\,D\Phi(t,x),\,D^{2}V(t,x),\\
&\qquad\quad \int_{E}\big(\mathcal{I}V(t,e,x,u)+\Psi(t,x+g(t,e,x,u))\big)\,l(t,e)\,\nu(de)\Big)\\
&+\int_{E}\big[\mathcal{I}V(t,e,x,u)-\big(g(t,e,x,u),\,DV(t,x)\big)\big]\,\nu(de)
+\int_{E}\big[\mathcal{I}\Psi(t,e,x,u)\big]\,\nu(de)\bigg\}\,dt\\
&-\Phi(t,x)\,dW(t)-\int_{E}\Psi(t,e,x)\,\tilde{\mu}(de,dt),\\
V(T,x) ={}& \; h(x),
\end{split}
\right.
\end{equation*}
where $\tilde{\mu}$ is a Poisson random martingale measure, $W$ is a Brownian motion, and $\mathcal{I}$ is a non-local operator to be specified later. The function $H$ is a given random mapping arising from a corresponding non-Markovian optimal control problem. The equation above is the stochastic Hamilton--Jacobi--Bellman (HJB) equation characterizing the value function of the optimal control problem with a recursive utility cost functional, and its solution is a predictable triplet of random fields $(V,\Phi,\Psi)$. We show that, under suitable regularity assumptions, the value function solves the stochastic HJB equation; conversely, a classical solution of this equation coincides with the value function and yields the optimal control. Under additional assumptions on the coefficients, we establish existence and uniqueness of a solution in the Sobolev sense by recasting the backward stochastic integro-partial differential equation with jumps as a backward stochastic evolution equation with Poisson jumps in Hilbert spaces.
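For orientation only (the operator $\mathcal{I}$ and the mapping $H$ are defined precisely later in the paper), a common choice of the non-local operator in the jump-diffusion HJB literature, recorded here merely as an illustrative assumption and not as the definition adopted below, is
\begin{equation*}
\mathcal{I}V(t,e,x,u) \;=\; V\big(t,\,x+g(t,e,x,u)\big)-V(t,x),
\end{equation*}
in which case the term $\int_{E}\big[\mathcal{I}V(t,e,x,u)-\big(g(t,e,x,u),\,DV(t,x)\big)\big]\,\nu(de)$ plays the role of the compensated jump part of the generator.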