Official response from Wiley

Response from Meghana Hemphill, Nov. 20, 2019

Dear Dr. Hagerty, Prof. Moreau, Dr. Poulson, and Mr. Rubinov,

Thank you for your patience during our review of your letter dated September 23, 2019. We take the points you’ve raised seriously; we have engaged the corresponding editors, the article’s authors based in China and Australia, and Wiley’s internal publication ethics team in our review and in writing this response. Below, we include responses to each of the points you raise in your September 23 letter to Wiley.

You are correct that all Wiley journals and journal editors are, by default, members of COPE. This is listed on the WIREs website, at http://wires.wiley.com/WileyCDA/Section/id-398153.html, and has been listed there for many years (i.e., not in response to this letter). The WIREs team and Wiley have followed COPE guidelines in reviewing and responding to the points you raise.

You are correct that the WIREs website section on COPE had mentioned the guidelines and linked to the now-retired Code of Conduct and Best Practice Guidelines for Journal Editors. Rest assured that we have been using current COPE flowcharts in specific scenarios that come up with our editors and have done so in this case as well. The WIREs link has now been updated.

We agree that the article in question (DOI: 10.1002/widm.1278) includes research involving humans.

We are aware of the persecution of the Uyghur communities, which has been condemned by the United Nations High Commissioner for Human Rights. However, this article is about a specific technology and not an application of that technology. It bridges artificial intelligence and physical anthropology, and contributes to this specific body of scientific literature.

According to the WIREs Data Mining and Knowledge Discovery editors, the university where this research took place has a diverse student body and does not discriminate against any minority groups. Therefore, the students who took part in this study were also diverse and representative of the minority populations in the student body. Uyghur students were not specifically recruited for the study. The consent form and university approval are attached and show that this study was intended as academic research.

Given that WIREs Data Mining and Knowledge Discovery is predominantly a reviews journal, we did not have standard production practices in place to publish ethics statements related to human subject research with this article. As soon as we were alerted to this oversight, we double-checked with the editors and authors and added the statement on July 29, 2019. Since it did not change the content of the article of record, our understanding was that an erratum or corrigendum was not needed; however, in consultation with a Production Manager as part of our review of your letter, we learned that we were mistaken. We are now taking steps to correct this and will publish an erratum noting the change made to the article, as well as an inline correction statement on the article recording the date the ethics statement was added.

Based on our review of the case, the journal and editors acted in accordance with the ethical guidance that is available, and appropriate approvals were obtained from the university and the research subjects. As noted above, a copy of the consent form and the university approval are included here.

We respectfully reiterate that this article is about a specific technology, not any application of that technology. Facial recognition technology has largely been developed using mostly white, Caucasian subjects (for example, by Facebook and Google). The technology and the research behind it are enriched by expanding beyond white, Caucasian subjects. Some references on this topic: https://www.sciencedirect.com/science/article/abs/pii/S0969476519301146#bib2 and https://www.wired.com/story/best-algorithms-struggle-recognize-black-faces-equally/.

After our internal ethics investigation, we stand behind the editors, authors, and peer reviewers.

We have carefully considered the statements made in your letter and, as noted above, we respectfully disagree that the journal, the authors, or Wiley acted in an unethical manner in publishing this article. We do understand that the rapid pace of developments in data mining and artificial intelligence technologies can be a double-edged sword, creating new opportunities and fields of inquiry in biometrics, advertising, and the identification of missing people, while unquestionably raising ethical concerns that require the community to engage around new standards. This calls for a delicate balance of achieving the benefits while preventing unethical practices, to ensure we’re publishing sound science that meets strict requirements. As our Editor-in-Chief states in an upcoming editorial, “We regard comprehensive discussions on new, potentially controversial technologies and their ethical implications as a mission equally important as the dissemination of high-quality technical knowledge about mining data.”

Thank you for your concern; we very much welcome this feedback and engagement from the community.

Meghana