It depends on what you are trying to achieve.
If you want to gain insight into which parts of the code have poor coverage, then an extra job that runs weekly is enough; you can use its output reports to identify places where coverage should be improved. I would say that is a good and easy starting point.
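For example, on a Python project that weekly job could be a small wrapper around pytest and coverage.py. This is only a minimal sketch; the exact commands depend on your stack and CI system:

```python
#!/usr/bin/env python3
"""Weekly coverage job sketch (assumes a Python project using
pytest and coverage.py; adapt the commands to your own toolchain)."""
import subprocess
import sys


def main() -> int:
    # Run the test suite under coverage measurement.
    run = subprocess.run(["coverage", "run", "-m", "pytest"])
    if run.returncode != 0:
        return run.returncode

    # Produce a browsable HTML report for digging into weak spots ...
    subprocess.run(["coverage", "html", "-d", "coverage_report"], check=True)
    # ... and a terminal summary sorted so the worst-covered files
    # show up first in the job log.
    subprocess.run(["coverage", "report", "--sort=cover"], check=True)
    return 0


if __name__ == "__main__":
    sys.exit(main())
```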
If you want reasonable insight into the coverage impact of every PR, just running coverage over the whole project and showing a single number is not very useful outside of very small projects.
Once you have a reasonably large codebase with hundreds of modules, even a large PR without any tests will barely shift the needle, so its author won't feel bad about not writing tests at all. The same applies to a PR that is 100% covered: it won't shift the needle either, so there is no reward for the author.
For that purpose I found it best to present only the coverage of the modules affected by that PR. That way both cases show a much more realistic number for the modules they actually touch (see the sketch below).
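A minimal sketch of that idea, assuming a Python project where coverage data has already been collected (`coverage run -m pytest`) and the PR branches off `origin/main` (both are assumptions), could look like this:

```python
#!/usr/bin/env python3
"""Report coverage only for the files touched by the current PR."""
import subprocess
import sys


def changed_python_files(base: str = "origin/main") -> list[str]:
    # Files modified by this PR relative to the base branch.
    out = subprocess.run(
        ["git", "diff", "--name-only", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [f for f in out.splitlines() if f.endswith(".py")]


def main() -> int:
    files = changed_python_files()
    if not files:
        print("No Python files changed; nothing to report.")
        return 0
    # Restrict the coverage report to the modules the PR touched,
    # so the number reflects only the code the author worked on.
    return subprocess.run(
        ["coverage", "report", "--include", ",".join(files)]
    ).returncode


if __name__ == "__main__":
    sys.exit(main())
```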
If your goal is to enforce some minimal coverage, I would rather advise educating and motivating people on why writing unit tests is good for them. It also matters a lot how the unit tests are written: test behavior, not implementation; avoid mocks; test mainly the entry points so the whole logic gets exercised; and fake only what is necessary, such as the network/OS layers.
That way you have few tests, they don't need to be changed during refactorings, and they usually bring high coverage without writing low-value tests for every class/function.
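As an illustration, here is a sketch of such a behavior-level test in Python. All names (`PriceService`, `sync_prices`, the `/prices` endpoint) are hypothetical; only the HTTP call is faked, everything else runs for real:

```python
"""Behavior-level test sketch: fake only the network layer."""
from dataclasses import dataclass
from typing import Callable


# --- production-style code (hypothetical example) -------------------
@dataclass
class PriceService:
    http_get: Callable  # the only seam: the network layer

    def sync_prices(self, symbols):
        # Entry point: fetch raw data and convert cents to dollars.
        raw = self.http_get("/prices", params={"symbols": ",".join(symbols)})
        return {item["symbol"]: item["price_cents"] / 100 for item in raw}


# --- test ------------------------------------------------------------
def fake_http_get(path, params):
    # Fake only the HTTP call; no mocks of internal classes.
    return [{"symbol": s, "price_cents": 1234} for s in params["symbols"].split(",")]


def test_sync_prices_converts_cents_to_dollars():
    service = PriceService(http_get=fake_http_get)
    # Assert on observable behavior, not on which methods were called,
    # so the test survives internal refactorings.
    assert service.sync_prices(["AAA", "BBB"]) == {"AAA": 12.34, "BBB": 12.34}
```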