Utilitarianism

  1. AnalogousMethod posted 10 years ago

    This deserves its own thread.

    Josak, your example has drifted from utilitarianism to being goal-oriented. Goal-oriented is "I want to hit someone to make them mad". Yes, in that case, a larger sample will increase the likelihood of achieving your result.

    It's a completely separate topic from utilitarianism, where you claim you want to maximize utility. To maximize utility, you have to operate without restrictions on time: you have to compare the positive ramifications of your choice to the negative ramifications, throughout eternity.

    An example: you see some deer that are starving, so you give them food. They get hungry again, and return, and you give them food. You think that the food you share with them is maximizing utility, and keeping them alive.

    Ok, so you keep them alive, without knowing whether or not they would have actually died without your help. They end up raising offspring, and through your feeding, do better than the wild deer, and become more domesticated.

    Then something happens to you, and you can't take care of them, and they've forgotten how to forage, and are too trusting of strangers, so they all die. Since they were the best-fed, the predators had been picking off the weaker, wild-foraging deer all this time. Now, all the deer are gone.

    That's an example of how what appears to be a positive ramification is actually occurring at the expense of others. In reality, this example killed the entire species in order to feed a few, which is a poor outcome. (cont)

    1. Josak posted 10 years ago in reply to this

      I already easily and completely covered this for you.

      To argue this point you must prove that a random action has a better chance of creating the predicted effect than a calculated one in most cases.

      It is no revelation to anyone that sometimes actions do not have the consequences one predicted. But for sane individuals they usually do; otherwise you would never do anything, because you would not believe the consequences would be predictable.

      i.e. You would not work because you would not predict being paid.

      Again, the ONLY way you can reasonably argue this case is to prove that a random action has a better chance of creating the predicted outcome in most cases.

      All you are doing is providing exceptions that prove the rule.

      1. AnalogousMethod posted 10 years ago in reply to this

        No, you covered a completely different thing. It's not about achieving a predicted effect. It's about weighing ALL of the effects, forever, that stem from that choice. You can't do it. It doesn't matter if your choice achieves the predicted effect (we aren't talking about single effects) if it also creates a much worse effect. If it does, then you have acted contrary to your own morals. Look at the complexity again when you add a single extra variable, and you'll see that it's practically guaranteed that you will.

        Again, you are trying to compare singular goals over small time-frames to eternity and infinity. There is no comparison.

        I've really said everything I wanted to say about this, at least to you. You can contest it, but unless you want to discuss something other than what you've already said, there's no point.

        1. Josak posted 10 years ago in reply to this

          You can never weigh all the consequences of any action accurately; does that mean you should never do anything?

          Going to work might get you hit by lightning from the blue and killed, but it will probably be positive.

          And again you are repeating the fallacy that only short-term consequences are considered. The examples I gave used short-term consequences only because that is faster and easier for purely metaphorical exercises.

          1. AnalogousMethod posted 10 years ago in reply to this

            No, it doesn't mean you should never do anything, unless you are a utilitarian. If you are a utilitarian, then you have to recognize that the ripples of any action you take are almost mathematically guaranteed to cause problems greater than you can possibly imagine, eventually.

            Going to work also might make you crash into an FBI vehicle that is chasing a terrorist who has a nuke that would have been caught if it weren't for you, but instead the terrorist starts a world war and humanity is ended.

            The further out into eternity you extrapolate any action, the more likely it is going to have a horrible, horrible effect, and the more likely you are going to be doing exactly opposite of what you actually want to do.

            Mathematically, utilitarianism doesn't hold up. It could only be followed with perfect knowledge of everything. Without that, you are only guaranteed to add to the complexity and cause greater problems.

    2. Josak posted 10 years ago in reply to this

      Look, this is simple: just answer these and the follow-up questions honestly:

      Have you ever worked for profit?

      If so why?

      1. AnalogousMethod posted 10 years ago in reply to this

        Yes. Because I want money so I can live and do things.

        Your questions are unrelated to the topic of moral utilitarianism. I don't even believe it is possible to honestly attempt to maximize utility for others.

        1. Josak posted 10 years ago in reply to this

          Good, then you made a long-term prediction that going to work would allow you to live and do things, even though there is no possible way for you to calculate all the possible results, which are so myriad as to boggle the very mind. As it turned out you were probably right, and you are now living and doing things with that money; congratulations on your successful prediction.

          So you did exactly what a utilitarian does. I am glad we agree.

          That principle you used is applied to all decisions by the utilitarian, and as we already covered, affecting more people does not necessarily mean a less predictable outcome.

          1. AnalogousMethod posted 10 years ago in reply to this

            Lol, no Josak. My decisions have nothing to do with trying to maximize utility in a society. Absolutely nothing. I never claimed to make that kind of evaluation. But, to be a utilitarian, you have to make those evaluations. Otherwise, you're not being true to your ideals. You wouldn't just do something, having no idea if it will hurt or help, would you?

            I can guess the sun will come up tomorrow, but that has nothing to do with trying to figure out the eternal ramifications of, say, having a welfare state like ours.

            I'm sorry, I don't believe you're interested in a real discussion. It's like everything I've said you just ignore or twist. There is no comparison between thinking the sun will rise and thinking you know how to maximize utility of resources in society.

            Your continual mischaracterization of what I say and what my morals are makes it seem like you don't really care. I'm sorry for that, because I enjoyed thinking about this, and wanted to enjoy discussing it.

  2. AnalogousMethod posted 10 years ago

    Now, when evaluating the effects of doing something vs. not doing something, throughout eternity, we run into some very serious problems.

    Imagine we have a bag of a million marbles. We pour them out and try to predict all the collisions and where everything will end up. Hard to do, right? Mathematically, it's impossible (a simplification: it's not literally impossible, but it approaches impossible, which is effectively the same thing; you can't divide by zero, but you can look at what happens as the divisor approaches zero, and for all intents and purposes that tells you the same thing).

    Now, you think you know where all those marbles are going to end up (you don't; it's impossible), and you think that it's going to be bad (you don't know that), and you think that you can influence those events to be better (you don't know that either).

    So, when the marbles are pouring out, you take an extra marble and throw it into the stream. Now, your one marble has the opportunity to affect a million other marbles, and the effect that it has on each marble can affect each of the other million marbles, and each of those effects can affect each of the other million marbles, and so on and so forth, forever. So, over a time frame t in which the new marble has influenced a million marbles, directly or indirectly, you have now squared the complexity of the simulation. And for each time frame t that you progress through the simulation, the complexity squares again, and again, exponentially. In other words, you just made an impossible decision impossibly more difficult.
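    To put a rough number on that growth, here is a minimal sketch in Python (a toy model of my own, not part of the original argument, and more conservative than the "squaring" described above): assume the extra marble can influence up to N others per time step, and each influenced marble can do the same in the next step, so the number of possible influence chains grows roughly like N to the power t.

        # Toy model (illustrative only): one extra marble can touch up to N others
        # per time step, and every touched marble can touch N more the next step.
        # The number of possible influence chains after t steps is roughly N**t.
        N = 1_000_000  # marbles the extra marble could eventually affect

        for t in range(1, 6):
            chains = N ** t
            print(f"after {t} step(s): about {chains:.2e} possible influence chains")

    Even under this milder assumption, the count is astronomically large after a handful of steps, which is the point about unintended ripples.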

    Since you have increased the number of possible interactions for every timeframe through infinity, you are effectively infinitely more likely to cause greater damage than whatever short-term benefit you might succeed in bringing about.

    So, to try and feed 98, you could be starving a trillion.

  3. AnalogousMethod posted 10 years ago

    To return to the analogy of the men on the island (I know it was 50 before; we'll call it 100 now, and include women).

    50 men and 50 women on an island. 1 couple works hard gathering food while the others relax on the beach. They gather enough food for everyone. A benevolent leader decides it is most useful to distribute the food to everyone so nobody starves, because the 1st couple can't eat it all anyway. It will spoil.

    So, the next day, the 1 couple works again, and everyone else plays because they know they will be fed.

    This continues, until the 1st couple gets fed up and stops working. Suddenly nobody has any food. The next day, some try to find their own food, but they are already too weak from a day in the sun.

    The entire population dies.

    This is one possibility, but it's an example of how a short-term benefit can have far worse results in the long-term.

    To contrast, what if the leader doesn't distribute the food? The next day, the other 49 couples realize they will die if they don't work for some food. They are weak, but the 1st couple sees them trying and helps them out. Everyone eats, and learns a good lesson.

    Over time, this population of 100 advances, grows, and soon becomes a large civilization of billions of people.

    Because you thought it best to feed the 98, you could have sacrificed the entire civilization, and a great future.

    When you consider these examples (there are infinitely more ways these situations could play out), and contrast that with the complexity of evaluating the effect of a decision, it becomes clear that not only is it impossible to determine the most effective use of resources, but trying to add complexity to an already existing situation is almost guaranteed to eventually cause a problem that is far, far worse.

    1. Josak posted 10 years ago in reply to this

      Again you:

      Must prove that a random action has a better chance of creating the predicted effect than a calculated one in most cases.

      Now you have introduced a new fallacy that these calculations should be done on the basis of short term benefit only.

      1. AnalogousMethod posted 10 years ago in reply to this

        No, I specifically said they should be done on the long term, not the short term. Did you even read it?

        The "predicted effect" is maximizing utility. That's the goal. By adding complexity, you are guaranteed to cause a problem worse than whatever "predicted effect" you think you are going to bring about.

        1. Josak posted 10 years ago in reply to this

          Nope, you didn't understand what I said; maybe I wasn't clear.

          I am specifically stating that utilitarian decisions are not made with only short-term views in mind, as you are positing; they are made by logically balancing long and short term.

          1. AnalogousMethod posted 10 years ago in reply to this

            But they don't consider the long-term. Please, show me a single one that considers the ramifications of an action through eternity. Just one.

            If you do, then I'll show you one that couldn't possibly be accurate.

            That's the point: you can't balance them. It's impossible to evaluate. And again, given the exponential complication that comes from adding variables, you are almost guaranteed to cause a problem far greater, actually infinitely greater, than whatever problem you hoped to solve in the first place.

            Every action a person makes has the potential to end humanity completely.

            1. Josak posted 10 years ago in reply to this

              Any decision should be made with the long term consequences of the decision in mind.

              For example, I made the conscious decision to become a US citizen. I thought this would allow me to live with my wife in her home country, raise children, and offer her and them a security my home country could not, due to the genocidal dictatorship it had. As it turned out I was correct, and it worked out just fine.

              Instead, just after I became a citizen, a huge war might have broken out and anyone with military experience might have been conscripted; I would then have been sent to war and possibly killed. This was less likely, so I did not let it change my decision.

              It is no revelation to anyone that sometimes actions do not have the consequences one predicted. But for sane individuals they usually do; otherwise you would never do anything, because you would not believe the consequences would be predictable.

              Your position is utterly untenable because you regularly make predictions in your own life on exactly this basis even though the range of consequences is massive.

          2. Josak posted 10 years ago in reply to this

            To illustrate:

            You go to work because you predict this will get you paid, and it does. You want this money to buy a new house; you continue to work and achieve this goal. When you are moving, you become friends with the man you hire to help you move, and eventually you go to his house for a dinner party. There you meet a female friend of his and fall for her, you move in together, and this leads to her ex-husband killing you both in a fit of rage.

            In this example going to work was a bad idea. But the vast majority of the time the outcome will be a positive one, hence you weigh these likelihoods and go to work.

            1. AnalogousMethod posted 10 years ago in reply to this

              I'm going to go to work anyway. That's one variable. Additionally, there is another aspect I haven't covered which is harmony, but this is a good time to illustrate it.

              Let's say we are both going to work. We live in opposite towns and work in each other's towns. On a normal day, we both go to work, we both drive along with traffic, and the traffic leaving each town equals the traffic coming in. There is a natural balance and harmony that exists. People's desires for efficiency cause these harmonies, or efficiencies, to occur all the time. Think of it as pouring the marbles down a track; it's much less chaotic than just pouring them on the floor.

              But, if you decide to do something different, and stay home, then suddenly you are going to be having all these interactions with the marbles coming into your town. What is normally harmonic suddenly turns chaotic. Collisions cause more collisions.

              When you choose to do something that is different from the status quo, you add a marble (variable) to the entire equation, but you are also going to upset the harmony.

              Since you can't possibly balance all the ramifications of a redistribution choice, it is illogical to attempt to do so, especially knowing that you make things more complicated and more chaotic.

              1. Josak posted 10 years ago in reply to this

                But that is exactly what you are doing when you go to work: predicting the most likely outcome in a series of events you could not possibly predict completely.

                Otherwise you would not go to work.

                You are doing exactly the thing you are criticizing.

                1. AnalogousMethod posted 10 years ago in reply to this

                  I'm not a utilitarian. I don't try to maximize utility, affecting others by force. How could I, knowing 100% that I have no idea if I would be making things better or worse?

                  I'm not doing what I'm criticizing. Not in the least. I'm just rolling along on my own. You're the one who wants to keep throwing marbles at the infinite stream to make it work better.

  4. AnalogousMethod posted 10 years ago

    Your definition of long-term is wrong. Very, very wrong.

    You're not thinking on the scale of forever. You're thinking on the scale of no-passage-of-time.

    My position isn't untenable because I'm not a utilitarian. I accept that I can't weigh the eternal consequences of my actions.

    You can't either, but you still think you can somehow maximize utility for others. Screw your own life up if you want, but why would you include others?

    You still think you can decide that it will be more beneficial to redistribute the food to the other 49 couples, while you admit you have absolutely no idea about the ramifications of that action through eternity.

  5. AnalogousMethod posted 10 years ago

    Josak, I really don't think you've understood the points I've been trying to make.

    Try and look at them objectively. Try to look at the analogies and examples from the position

    "Is it possible to evaluate a situation in such a way that you can influence it for net-good, taking into consideration the ramifications of any actions, extended forever?"

    Realizing that any action you take will only add complexity and chaos.

  6. AnalogousMethod posted 10 years ago

    One last thing about time frames.

    You talk about having a reasonable expectation of getting a paycheck, or enjoying freedom in the US.

    Yes, we can have reasonable expectations over short time-frames. Your entire life is a short time frame. The existence of the US is a short time-frame. Consider the instability of nations over the last 2000 years. A stable country for 200 years is considered "good" to us.

    Realize, if we ever become a galactic species, then humanity would be most likely to survive for trillions of years. Trillions. If multi-verse theory is true, it could be indefinite.

    So the kind of time-scales that you talk about being 'reasonable expectations' are so small, that we could compare them like this:

    "Reasonble time-frame": The width of this period: .
    Truly long-term: 5 trips around the world.
    Possible long-term: From here to the edge of the universe, a trillion times over.
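
    To put rough numbers on that comparison (my own back-of-the-envelope figures, not from the thread), here is a quick sketch comparing a couple of centuries of "reasonable expectation" against the trillion-year scale mentioned above:

        # Back-of-the-envelope comparison (assumed, purely illustrative figures).
        reasonable_horizon_years = 200           # roughly a long-lived, stable nation
        long_term_years = 1_000_000_000_000      # the trillion-year scale above

        fraction = reasonable_horizon_years / long_term_years
        print(f"a {reasonable_horizon_years}-year horizon is {fraction:.1e} of a trillion years")
        # prints: a 200-year horizon is 2.0e-10 of a trillion years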

    So how much of a reasonable expectation do we really have of an expected result happening long-term? How much of a reasonable expectation do we really have that the USA will survive another 100, 200, or even 1000 years?

    When you consider utilitarianism from the viewpoint of true long-term, it is untenable.

    The only way it would work would be if you had perfect data about everything, forever. I doubt that's going to happen.

    1. Josak posted 10 years ago in reply to this

      You are just expanding the same issue by making it longer. It doesn't matter how long the time period is; it will (most of the time) be easier to predict than to randomly guess out of the "near infinite" possibilities.

      1. AnalogousMethod posted 10 years ago in reply to this

        Yes, it does matter how long it is! If you can only "reasonably predict" this far: .

        without knowing at all the effects that it will have from here to the edge of the universe and back,

        then you haven't evaluated the decision! If you don't evaluate the decision, then you aren't rationally trying to maximize utility. You're trying to do what you think is best short-term, screw the consequences.

        How can you not understand that?

        1. Josak posted 10 years ago in reply to this

          You absolutely can. Let me give an example. I am a humanist; as such, I think the most important legacy we have is what our history teaches those who come after, and therefore I think it's important to live our lives in a way that leaves a positive message for future generations. That is why I risked my life to fight tyranny, and why I believe in the politics I do: not just because they are good for people now, but because in our histories they will leave a positive message.

          When Momoro said "Liberté, égalité, fraternité" in 1793 he made very little change in his own time. But years after, that would form the basis for the French Republic, and that message will echo in human history forever, across the stars and to the ends of the universe and back (to wax poetic). In the same way, when we make a political decision now, the ethics of that choice will echo too.

          I believe that a message of compassion, for example universal healthcare, will echo throughout the ages forever, so when I say I think it's a good idea, I say that with the prediction that it will be good for humanity throughout "eternity" (though I do not believe humanity is eternal).

          So far from "screw the consequences" I believe the very opposite.

          You misunderstand my views obviously. Don't assume.

          1. AnalogousMethod posted 10 years ago in reply to this

            Ok, you believe that, fine. I believe that coddling people hurts them just as much as feeding wild animals hurts them.

            Where is your evidence that it will happen? Remember, you claimed that this is a logical, emotionless, mathematically moral framework. You must have gone through some logical process to determine that it is more likely that the echoes of compassion will be net-beneficial to society.

            Why don't you think humanity is eternal? All we have to do is last long enough to make it off this rock, and we'll no longer be under the threat of eliminating ourselves with our most devastating weapons. One nuclear war on earth would destroy humanity, but if we are on different planets, moons, and galaxies, that's not going to happen. Do you have any logical reasoning to think we won't last as long as the universe?

            1. Josak posted 10 years ago in reply to this

              "Dave Raup and Jack Sepkoski claim there is a mysterious twenty-six-million-year periodicity in elevated extinction rates. Based on past extinction rates, Raup and others infer that the average longevity of a typical vertebrate species is 2-4 million years. However, generalist, geographically dispersed species, like humans, may have a lower rate of extinction than those species that require a particular habitat."

              I am unconvinced we will live long enough to get off this Earth sustainably, and if we do, then who knows what universe-wide catastrophes there are that we have no idea about (or just genocidal aliens). Please note I hope you are right and I am wrong.

              "Why do I believe this?" is a simple question, but it requires a massive justification of my entire political belief system and proof of every precept in it. I can write that, but would you even want to read it?

              Brutally, inadequately, horribly short answer: I believe, as an ardent student of history, that ideas that promote utilitarianism have been the most historically successful, and on the basis of an admittedly small sample (4,000 years at best) I think such a trend continuing is the most likely outcome.

              I believe the scientific method has been the biggest driver of human advancement, and as such an ethical system which uses it is likely to be too.

              I referenced earlier the collapse of the Roman Empire as an illustration of the difference between "big government" utilitarian ideals, like a welfare state, and the medieval period without them.

              1. AnalogousMethod posted 10 years ago in reply to this

                Ok, fair enough. I don't believe that the data set is anywhere near large enough to get useful data, nor do I think any of the historical issues are as simple as anyone thinks (after all, history is usually written by a biased, victorious side).

                As for humanity, well I think we're only about 80 years away from being able to sustain life off the earth indefinitely. Realistically, we can do it already, but just not on a practical scale yet. I really don't think anything is going to destroy humanity in the next 80 years, but a world war/nuclear war could do it or set us back.

                If we succeed in that, then we have an amazing chance. True, we don't know about everything that exists, but there are three scenarios that are most likely.

                1 - The speed of light is constant and the limit, and there are no tricks for bridging space-time. If that's the case, then we will probably be limited to colonizing our galaxy, and it will take a very long time. I don't know of any stars close enough to us that are expected to go supernova in that timeframe, so we should be safe. If that's the case, it also means that we probably could never meet up with alien life in the universe (it probably exists). The distances are just too great.

                2 - We find a trick. If that's the case, then no natural event short of the destruction of the universe would be likely to end humanity. Aliens, possibly, but any trans-galactic species would likely have had to learn peace to survive the technological advances of nuclear fusion, fission, and the immense energies needed to bridge space-time.

                3 - Multiverse is true, and all expectations go out the window. We learn to create our own universes.


                Either way, humanity could probably survive for a very, very long time.

                What I think is sad is that I would bet we will have aging figured out about 100-200 years from now. Far enough away that those of us alive today probably won't be around for it, but maybe our kids/grandkids will be. That would be an amazing time to live, being told you could stick around as long as you want, and go out exploring the universe.

                My greatest hope is that those discoveries come in time for me to see.

                Lol, I think I started rambling.

                1. Josak posted 10 years ago in reply to this

                  Far be it from me to criticize optimism, I completely hope you are correct.
                  I will say that species, nations, etc. have a way of seeing themselves as more cosmically significant and long-lasting than they actually are. Rome believed it would be forever.

                  4,000 years is a pretty hefty sample to simply dismiss. As for its veracity, there are definitely issues with bias, but at the same time, if utilitarian and progressive systems are being favorably biased by history being written by the winners, it implies such systems triumph, and that in turn means they are doing something right.

  7. AnalogousMethod posted 10 years ago

    One last try, because I'm hard-headed.

    You admit that there are often unintended consequences.

    Now think of the time scale again. Everybody knows, with only '.' that much experience in time, that there are often unintended consequences of an action. Everybody knows it!

    If it happens soooo much in just '.' that much time, then what kind of unintended consequences do you think there could be when you stop thinking in terms of '.' and start thinking in terms of a line that extends from here to the edge of the universe?

    How can anybody honestly say they think they can make a change that will have a net-good effect over the next trillion trillion trillion years? We have a hard time predicting what the climate will be in a decade, with the smartest people on the planet working together on it.

    So, I answered your questions, can you answer some for me?

    Do you actually think you can evaluate the consequences of a decision that affects other people, utilitarian style (taking away his fish to feed them), sufficiently to say that the net result (time independent) will be positive?

    If no, then why would you try to maximize utility if you admit you have no way of knowing whether or not an action will maximize utility?

    Or, is the extremely short-term somehow more important to you?

 