With the rapid development of financial technology, algorithmic trading has begun to replace human decision-making. One such class of algorithms is reinforcement learning, which learns trading actions from historical data. In this work, we build a modular reinforcement learning system, augmented with integrated predictive models, to tackle stock trading. More specifically, we train an LSTM model to predict future stock prices and integrate it into offline DQN (Deep Q-Network) reinforcement learning agents. We use common financial metrics, such as annual rate of return, Sharpe ratio, and maximum drawdown, to evaluate the performance of the models and agents. Compared with naive buy-and-hold and Day 30 baseline strategies, the RL agents achieved promising returns while keeping risk within an acceptable range. Although agent performance varied with historical price fluctuations and the time segments used for prediction, reinforcement learning demonstrated promising advantages over human decision-making.
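
For concreteness, the following is a minimal sketch of how the three evaluation metrics named above could be computed from a series of daily portfolio values. The function name `evaluate`, the 252-trading-day year, and the zero risk-free rate are illustrative assumptions, not details specified in this work.

```python
import numpy as np

def evaluate(portfolio_values, trading_days_per_year=252, risk_free_rate=0.0):
    """Sketch of the three evaluation metrics from daily portfolio values.

    Assumes `portfolio_values` is a 1-D series of end-of-day values; the
    252-day year and zero risk-free rate are illustrative defaults.
    """
    values = np.asarray(portfolio_values, dtype=float)
    daily_returns = values[1:] / values[:-1] - 1.0

    # Annual rate of return: compound daily growth scaled to a trading year.
    years = len(daily_returns) / trading_days_per_year
    annual_return = (values[-1] / values[0]) ** (1.0 / years) - 1.0

    # Sharpe ratio: mean excess return over its volatility, annualized.
    excess = daily_returns - risk_free_rate / trading_days_per_year
    sharpe = np.sqrt(trading_days_per_year) * excess.mean() / excess.std(ddof=1)

    # Maximum drawdown: largest peak-to-trough decline of the value curve.
    running_peak = np.maximum.accumulate(values)
    max_drawdown = ((values - running_peak) / running_peak).min()

    return annual_return, sharpe, max_drawdown
```

Under these definitions, a higher Sharpe ratio indicates better risk-adjusted return, while a maximum drawdown closer to zero indicates that risk stayed within a tighter range.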