No. This is a system support task. You are monitoring a system that (supposedly) was good enough to deploy, so QA is done with it. If monitoring shows the system is degrading, it may be a matter of adding equipment, which is not QA's function or call. If monitoring shows the system wasn't good enough to deploy in the first place, you hope you said so and they just didn't listen. But even in that case, it is back to the drawing board, not another round of testing.
I think there's a middle ground here. Performance monitoring is almost always done by IT / production staff, not QA. On the other hand, QA teams could probably spend more time thinking about the production environment, especially when designing performance tests.
For example, soak tests need to run over several days to see whether there are long-term load issues. Test data (and the database used by the system) should reflect not only a pristine new environment but also usage after 3, 6, or 12 months of operation.
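To make that concrete, here is a minimal sketch of one way to "age" a test database before a performance run. The table name, row counts, and growth rate are invented for illustration and would need to match your own system's data model and traffic:

```python
# Hypothetical sketch: aging a test database so performance tests run
# against realistic data volumes rather than a pristine schema.
# The "orders" table and ORDERS_PER_MONTH are illustrative assumptions.
import sqlite3
import random
import datetime

MONTHS = 12                 # simulate a year of usage
ORDERS_PER_MONTH = 50_000   # assumed growth rate; tune to your system

conn = sqlite3.connect("perf_test.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS orders "
    "(id INTEGER PRIMARY KEY, created_at TEXT, status TEXT)"
)

start = datetime.date.today() - datetime.timedelta(days=30 * MONTHS)
for month in range(MONTHS):
    # Spread each month's rows across random days within that month.
    rows = [
        (
            (start + datetime.timedelta(days=30 * month + random.randint(0, 29))).isoformat(),
            random.choice(["open", "shipped", "cancelled"]),
        )
        for _ in range(ORDERS_PER_MONTH)
    ]
    conn.executemany("INSERT INTO orders (created_at, status) VALUES (?, ?)", rows)
conn.commit()
conn.close()
```

Running the same soak test against the empty database and the aged one often exposes index and query-plan problems that never show up on day-one data.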
Additionally, QA knowledge can be vital in diagnosing problems that occur in production and in identifying components that need further engineering.
Production teams might want to leverage QA automation scripts and expertise in their post-deployment monitors. Learning more about IT's needs during the application development process can deliver better value for money from tool and human capital investments, raise the quality of applications sent to production, and improve IT teams' ability to monitor them accurately once in place.
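As a rough illustration of that reuse idea, a latency check that QA ran at release time could be rerun on a schedule as a production probe. Everything here is a stand-in: the endpoint URL, the latency budget, and the `check_search_latency` helper are hypothetical, not part of any particular QA suite:

```python
# Hypothetical sketch: an existing QA assertion rewrapped as a recurring
# post-deployment monitor. The endpoint and budget are illustrative.
import time
import urllib.request

ENDPOINT = "https://example.com/api/search?q=smoke-test"  # assumed URL
LATENCY_BUDGET = 2.0  # seconds, taken from the QA acceptance criteria
INTERVAL = 300        # probe every five minutes

def check_search_latency() -> float:
    """The same check QA ran before release, now used as a probe."""
    start = time.monotonic()
    with urllib.request.urlopen(ENDPOINT, timeout=10) as resp:
        resp.read()
    return time.monotonic() - start

while True:
    elapsed = check_search_latency()
    status = "OK" if elapsed <= LATENCY_BUDGET else "ALERT"
    print(f"{status}: search responded in {elapsed:.2f}s (budget {LATENCY_BUDGET}s)")
    time.sleep(INTERVAL)
```

The point is less the script itself than the shared threshold: production alerts fire against the same acceptance criteria QA signed off on, so the two teams are measuring the same thing.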
Finally, application development teams should go back to production to confirm whether the app's usage matches or significantly diverges from the assumptions made during testing. A different or changing mix of tasks and user sessions can have a material effect on system performance.
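A lightweight way to spot that divergence is to compare the task mix assumed during testing against what production logs actually show. The categories, percentages, and the 10-point drift threshold below are all illustrative assumptions:

```python
# Hypothetical sketch: flagging drift between the task mix assumed in
# performance testing and the mix observed in production logs.
ASSUMED_MIX = {"browse": 0.70, "search": 0.20, "checkout": 0.10}
observed_counts = {"browse": 4200, "search": 3100, "checkout": 700}  # e.g. from log analysis

total = sum(observed_counts.values())
for task, assumed in ASSUMED_MIX.items():
    observed = observed_counts.get(task, 0) / total
    drift = observed - assumed
    flag = "  <-- revisit test assumptions" if abs(drift) > 0.10 else ""
    print(f"{task:10s} assumed {assumed:5.0%}  observed {observed:5.0%}  drift {drift:+.0%}{flag}")
```

In this made-up example the search share has tripled, which would be a strong signal to rerun the performance tests with an updated workload model.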