This paper introduces an approach to, rather than the final results of, the sustained research and development in roboethics described herein. Encapsulated, the approach is to engineer ethically correct robots by giving them the capacity to reason over, rather than merely in, logical systems (where logical systems are used to formalize such things as ethical codes of conduct for warfighting robots). This is to be accomplished by taking seriously Piaget's position that sophisticated human thinking exceeds even abstract processes carried out in a logical system, and by exploiting category theory to render in rigorous form, suitable for mechanization, the structure-preserving mappings that Bringsjord, an avowed Piagetian, sees as central to rigorous and rational human ethical decision making.
We assume our readers to be at least somewhat familiar with elementary classical logic, but we review basic category theory and the categorical treatment of deductive systems. Introductory coverage of the former subject can be found in Barwise and Etchemendy and in Ebbinghaus, Flum, and Thomas; deeper coverage of the latter, offered from a suitably computational perspective, is provided in Barr and Wells. Additional references are of course provided in the course of this paper.
A category consists of a collection of objects and a collection of arrows, or morphisms. Associated with each arrow f are a domain (or source), denoted dom f, and a codomain (or target), denoted cod f.
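The data just introduced can be sketched in code. What follows is a minimal illustrative sketch, not anything from the paper itself: it models only objects and arrows with their domains and codomains, and deliberately omits the composition operation and identity arrows that complete the definition of a category. The objects `A`, `B`, `C` and the arrows `f`, `g` are hypothetical examples.

```python
# Minimal sketch of the *data* of a category: a collection of objects,
# and a collection of arrows, each with a domain (source) and a
# codomain (target). Composition and identities are omitted here.

from dataclasses import dataclass

@dataclass(frozen=True)
class Arrow:
    name: str
    dom: str  # domain (source) object
    cod: str  # codomain (target) object

# Hypothetical example: objects A, B, C and arrows f : A -> B, g : B -> C.
objects = {"A", "B", "C"}
f = Arrow("f", dom="A", cod="B")
g = Arrow("g", dom="B", cod="C")
arrows = {f, g}

# Well-formedness: every arrow's domain and codomain are objects
# of the category.
assert all(a.dom in objects and a.cod in objects for a in arrows)
```

Note that with composition and identities left out, this structure is merely a directed graph; it becomes a category only once an associative composition with identity arrows is supplied, as the definition continues to require.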