Ethics

I will use this area to discuss a number of ethics-related topics. My main interest is in normative ethics: the project of understanding which actions are right and wrong. I am working on a sort of utilitarian ethics that will be relevant in ALL situations. This theory does NOT aim to tell you what to do in any one circumstance, but it will tell you what to aim for and why that is what we should be aiming for.


Outline of the bottom line ethics theory.

1. Different ways of looking at ethics are useful when asking different questions, both within normative ethics and outside of it.

2. Bottom line: in the end, what we really care about are consequences.

3. Better and worse consequences exist and can be looked at in a rational way that is not relative to time and place.



1. Different ways of looking at ethics are useful when asking different questions.
 

Three of the most common (but not the only) ways of looking at normative ethics are rule-based, character (virtue) based, and consequence (outcome) based theories.

Outside of normative ethics we have descriptive ethics (just describing what people consider right or wrong) and applied ethics (what we should do in a specific situation or type of situation). For the most part I spend my time thinking about normative ethics, but when people talk about ethics they are often referring to descriptive or applied ethics, so it is important to make this distinction.


Rule-based examples: 

  • Never steal
  • Never own slaves
  • 10 commandments

Under rule-based theories, we can ask the question, “Do people follow these rules?” If people follow the rules, they are doing what is right; if not, they are doing what is wrong.



Character or virtue-based examples: 

  • bravery
  • loyalty
  • honesty

Under character-based theories we ask, “Do people have or encourage these or other virtues or characteristics?” The right action is the one that a virtuous person would take in that specific situation.

 



Consequence-based examples:

  • Saving 5 lives is better than saving 1 life
  • 3 happy people are better than 1 happy and 2 sad.
  • greatest good for the greatest number

Under consequence-based theories we ask who is impacted, how much, and whether for better or worse. We judge right and wrong by the consequences of actions: not whether someone is following a rule or has a specific character trait, but what the outcome is, or is likely to be. (Consequence-based theories can define the best consequence in other ways, but utilitarianism, the greatest good, is by far the most common.)
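To make the second example above concrete, here is a minimal worked tally. It assumes, purely as my illustration and not as part of any standard theory, that each happy person scores +1 and each sad person scores −1:

    \underbrace{(+1) + (+1) + (+1)}_{\text{3 happy people}} = 3
    \qquad \text{vs.} \qquad
    \underbrace{(+1) + (-1) + (-1)}_{\text{1 happy, 2 sad}} = -1

Since 3 > −1, three happy people is the better outcome under this scoring. The point of the framework is the comparison itself, not the particular numbers assigned.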

Each of these three frameworks can be useful for answering different questions, or at different levels of understanding, in normative ethics.

None of these frameworks can answer every question about ethics, but one of them is what we can ultimately ground ethics in.


2. Bottom line: in the end, what we really care about are consequences.

My argument here is that one of these three frameworks is what we actually care about, and the others are ways we get to what we really care about. Ultimately, what we care about are consequences. We want to be around people who follow certain rules because those rules generally bring about better consequences. We care about having a society, or being around others, with certain character traits because fostering those traits tends to bring about better consequences.

Ultimately we care about the consequences, but that does not mean that looking at questions of right and wrong from a consequences perspective is the best way to actually get better consequences.

Saying that consequences are what ultimately matter is not the same as saying we should try to make, or teach others to make, every ethical choice by looking directly at consequences instead of at rules, virtues, or another point of view.



3. Better and worse consequences exist and can be looked at in a rational way that is not relative to the individual or society.

 

It is often asked, “Best consequence for whom?” or “Who decides?”

My claim here is that a best consequence exists but may not be 100% knowable. We can make better and worse guesses at what “best” is, regardless of any one person’s or group’s opinion or perspective.

The actions that get us to the best consequences are not 100% knowable, but I believe the concept of what is “best” is knowable. The best consequence I am looking for must be defined in a way that is always true, now and forever, in every circumstance. That said, the best action at any one time will always be changing as the environment changes, and our guess at the best action will also change as we get new data about the world.

This is why specific rules or character traits may be useful at one point and not another, but the “best” bottom-line consequence never changes.

The “best” consequence is the one that would be preferred when taking into account ALL preferences or “preferred states,” now and into the future.
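One rough way to write this down, using my own placeholder notation rather than any settled formula: let p_i(C, t) measure how well consequence C satisfies the preferences (or “preferred states”) of individual i at time t. The best consequence is then the one that maximizes the total over ALL individuals and ALL times from now on:

    C^{*} \;=\; \arg\max_{C} \; \sum_{i} \, \sum_{t \,\ge\, \text{now}} p_i(C, t)

Everything hard, such as how preferences are measured, weighted, and compared across people and times, is hidden inside p_i.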

I understand that this needs to be explained a bit more and that is something I am working on.



Why I care about ethics, and this aspect specifically.

First, I just think it is intrinsically interesting. I like to look at intuitions, heuristics, or norms we have, whether biological, cultural, or both, and think about why these ideas came to be. Some are useful, some are not, and some were useful at one time but are no longer as important, yet the ideas stuck with us. As the world changes faster, our evolved ideas about the world will often be less useful because they evolved in a different situation and environment. It is important to understand the bottom line, or what we really care about, when looking at something, and ethics is no exception.

The second reason I became interested in this question is the advancement of A.I. and the fact that we will need to give post-singularity A.I. more than a bunch of rules to follow. We will need to give a general A.I. a goal that will work in all circumstances. If we give it the “wrong” goal, it could be catastrophic. I am making no claim that “knowing” the right goal will make a difference, as that will depend on how A.I. is developed and evolves, but for a better outcome it could be useful to know what general goal to give it. If we do get to a place where computers keep improving their algorithms, increase data storage and access speed, and get more data inputs such as sensors, Wikipedia, or other data sources, the world will be much different, and our virtues and rules may not apply, but we will still want better consequences. What that consequence is needs to be understood, even if the actions to get there will be forever changing due to a new understanding of the world and an ever-changing environment.
