Some Final Thoughts on Terrorism

Over my past several posts, I’ve focused on a topic that has interested me for quite a while: terrorism. It was a big part of the National Security class I took last year, and I thought I would follow up with some interesting and hopefully insightful research. But now, after some consideration, I want to focus on other topics; constitutional law, economics, and foreign policy, to name a few prominent ideas I have in my head.

But first, I wanted to share some final thoughts on terrorism: primarily what it is and how we might solve it (if we can). The War on Terror is the era in which I grew up and am still growing up. I’ve lived through 9/11, Afghanistan, and Iraq, and now, over the past few years, a wave of terrorism in the West conducted by the Islamic State of Iraq and Syria (ISIS).

What I find interesting is how violent attacks are categorized: after some major attack, the police often tell the public that they “will not rule out terrorism” as a motive for the attack. But what separates terrorism from other acts of violence? One could say that a terrorist attack is one that intends to create fear to achieve a political, religious, or other ideological aim, but in the field of terrorism studies, the definition is much more vague.

In a historical context, the term “terrorism” traces its linguistic roots to the French word “terrorisme”, which comes from the Latin “terror” (great fear).[1] As Myra Williamson notes, “During the reign of terror, a regime or system of terrorism was used as an instrument of governance, wielded by a recently established revolutionary state against the enemies of the people. Now the term ‘terrorism’ is commonly used to describe terrorist acts committed by non-state or subnational entities against a state.”[2]

Simon reported in 1994 that there were at least 212 different definitions of terrorism in use across the world, with 90 of them used by governments and other institutions.[3] Several years earlier, Schmid and Jongman compiled over 100 academic and official definitions of terrorism to identify their shared components, and found the following:

(1) Concept of violence: 83.5% of definitions
(2) Political goals: 65%
(3) Causing fear and terror: 51%
(4) Arbitrariness and indiscriminate targeting: 21%
(5) Victimization of civilians/noncombatants/neutrals/outsiders: 17.5%.[4]

What this shows is that some elements one would expect to be shared by most definitions (like the bottom two on the list above) appear in only a minority of the vast array of definitions. It is because of this pile-up of conceptual problems that a universally accepted definition does not exist.

To give one instance of conflicting definitions, in Israel/Palestine, a public opinion poll conducted in December 2001 surveyed Palestinian reactions to two events of what are widely called terrorist acts. 98.1% of the Palestinians surveyed agreed or strongly agreed that “the killing of 29 Palestinians in Hebron by Baruch Goldstein at al Ibrahimi mosque in 1994” should be called terrorism, while 82.3% of the same respondents disagreed or strongly disagreed that “the killing of 21 Israeli youths by a Palestinian who exploded himself at the Tel Aviv Dolphinarium” should be called terrorism.[5]

In the post-9/11 world, the challenge of producing a coherent definition arguably worsened, and it persists to this day. Alex Schmid’s two book editions (published in 1984 and 1988) attempting to find a coherent definition of terrorism brought little success, as he was still searching for a broadly accepted and reasonably comprehensive explication.[6] In 2011, Schmid updated his definition to use 12 distinct points, chief among them being that “terrorism refers, on the one hand, to a doctrine about the presumed effectiveness of a special form or tactic of fear-generating, coercive political violence and, on the other hand, to a conspiratorial practice of calculated, demonstrative, direct violent action without legal or moral restraints, targeting mainly civilians and non-combatants, performed for its propagandistic and psychological effects on various audiences and conflict parties.”[7]

Attempts to define “terrorism” on an international scale have made little progress. In briefing the Australian parliament, Angus Martyn explained that in the 1970s and 1980s, the U.N. attempted to develop an accepted comprehensive definition of terrorism, but it failed primarily due to differences of opinion about “the use of violence in the context of conflicts over national liberation and self-determination.”[8] In light of these factors, international law professor Ben Saul argued for an all-encompassing definition of “terrorism”, saying that “If the law is to admit the term [terrorism], advance definition is essential on grounds of fairness, and it is not sufficient to leave definition to the unilateral interpretations of States. Legal definition could plausibly retrieve terrorism from the ideological quagmire, by severing an agreed legal meaning from the remainder of the elastic, political concept.”[9]

But what can we, as a community of people, governments, etc., do to stop terrorism? History tells us that terrorist groups have ended in several ways. Jones and Libicki studied all 648 terrorist groups they could find that were active between 1968 and 2006; as of 2006, 136 had splintered and 244 were still active. Of the 268 that ended, 43% converted to non-violent political action (the most famous case being the IRA after the Good Friday Agreement), 40% were taken out by policing and intelligence services, 10% ended in victory for the group, and the remaining 7% ended due to military force.[10]
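As a back-of-the-envelope check, the Jones and Libicki breakdown can be reproduced in a few lines of Python. The totals and percentage shares come from the study as cited above; the implied per-cause group counts are my own rounding, not figures from the study itself.

```python
# Sanity check of the Jones and Libicki figures cited above.
# Totals and percentage shares are from the study; the implied
# per-cause counts are back-of-the-envelope rounding.

total_groups = 648
splintered = 136
still_active = 244
ended = total_groups - splintered - still_active  # 268 groups ended

shares = {
    "politics": 0.43,   # converted to non-violent political action
    "policing": 0.40,   # dismantled by police/intelligence services
    "victory":  0.10,   # achieved their aims
    "military": 0.07,   # defeated by military force
}

for cause, share in shares.items():
    print(f"{cause:>8}: ~{round(ended * share)} groups ({share:.0%})")
```

The four shares sum to 100%, and the implied counts (~115, ~107, ~27, ~19) add back up to the 268 ended groups.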

Researcher Audrey Cronin lists three further primary ways that terrorist groups end, which are as follows:

(1) Capture or killing of a group’s leader (Decapitation)
(2) Group implosion or loss of public support (Failure)
(3) Transition from terrorism into other forms of violence (Reorientation)[11]

What is interesting and important to note in the Jones and Libicki study is that ending a group via military force is most effective when the group is an insurgency: large, well-armed, very lethal, and well-organized.

Their quantitative analysis found several interesting findings, including the following:

(1) Religious terrorist groups take longer to eliminate than other groups. Approximately 62% of all terrorist groups have ended since 1968, but only 32% of religious terrorist groups have ended.
(2) Religious groups rarely achieve their objectives. No religious group that has ended achieved victory since 1968.
(3) Size is a significant determinant of a group’s fate. Big groups of more than 10,000 members have been victorious more than 25 percent of the time, while victory is rare when groups are smaller than 1,000 members.[12]

In today’s age, as the Global Terrorism Database shows (the numbers are for 2015; data for 2016 has not yet been produced), 7 of the top 10 deadliest terrorist groups are motivated by radical Islam (ISIS, Boko Haram, the Taliban, al-Shabaab, Houthi extremists, al-Nusrah Front, and the Sinai Province of ISIS). Indeed, many of these most lethal attacks, and their attacks in general, are against other Muslims, but ISIS and these other radical Islamic groups brush them aside as (1) hypocrites and not true Muslims and (2) people who vote in the democratic process that ultimately authorizes what they call the “War on Islam.”[13]

But what is the best way to deal with these groups? In the case of ISIS (including its Sinai Province), western journalist turned hostage John Cantlie, writing for the ISIS English-language magazine Dabiq, wrote that negotiation is possible and “a truce with Western nations is always an option in Shari’ah law.”[14] In the editor’s note of another article on the same subject, also written by Cantlie, which stressed negotiations and what he argues is the inevitability of accepting ISIS as a legitimate state, ISIS noted that “a halt of war between the Muslims and the kuffar [non-Muslims] can never be permanent, as war against the kuffar is the default obligation upon the Muslims only to be temporarily halted by truce for a greater shar’i interest.”[15] But ISIS also gave another option, saying that if one does not submit to the authority of Islam by becoming Muslim, they can submit “by paying jizyah, for those afforded this option, and living in humiliation under the rule of the Muslims.”[16]

In essence, ISIS cannot be negotiated with unless it is on their terms. This also, in part, applied to Boko Haram, known as ISIS’s West Africa Province (Wilayat Gharb Afriqiyyah), until mid-to-late 2016, when the group split after ISIS appointed Abu-Musab al-Barnawi as the new leader of the wilayat; its former leader Abubakar Shekau refused to accept al-Barnawi’s appointment.[17] From what I could gather, the non-ISIS-affiliated faction led by Shekau has been at least willing to negotiate, as in the release of the Chibok schoolgirls kidnapped in 2014.[18]

Peace negotiations with the Taliban[19], al-Shabaab[20], and Houthi extremists[21] have made progress to varying degrees, but they have all hit difficulties along the way, due mostly to regional/local politics and the demands of some groups. It should be noted that al-Nusrah Front was dissolved earlier this year and renamed Hay’at Tahrir al-Sham, whose position is that negotiations with the Syrian government would be akin to, in their words, “suppress[ing] the revolution and crown[ing] the butcher [Assad].”[22]

In essence, it is possible to negotiate with these groups, but the success or failure of such negotiations remains to be determined, while there should be other options on the table should they fail.

[2]: Myra Williamson, Terrorism, War and International Law: The Legality of the Use of Force Against Afghanistan in 2001 (Farnham, UK: Ashgate Publishing, 2013).
[3]: Jeffrey Simon, The Terrorist Trap (Bloomington, IN: Indiana University Press, 1994).
[4]: Alex Schmid and Albert Jongman, Political Terrorism: A New Guide to Actors, Authors, Concepts, Data Bases, Theories, and Literature (Amsterdam, NL: Transaction Books, 1988).
[6]: Bruce Hoffman, Inside Terrorism: Second Edition (New York City, NY: Columbia University Press, 2006).
[9]: Ben Saul, “Defining ‘Terrorism’ to Protect Human Rights,” Sydney Law School Legal Studies Research Paper, No. 08-125 (2008).
[11]: Audrey Cronin, How Terrorism Ends: Understanding the Decline and Demise of Terrorist Campaigns (Princeton, NJ: Princeton University Press, 2009).
[14]: Dabiq, Issue 12
[15]: Dabiq, Issue 8
[16]: Dabiq, Issue 15

Terrorism in the West and Immigration: A Case Study

There has been a lot said about terrorism and immigration, with some claiming that immigrants and refugees rarely, if ever, commit terrorism, at least in the case of radical Islamic terrorism. Curious about this claim, I decided to do some research over the past week to see if it had any merit. You can view my research in the link below.

I based this study on radical Islamic terrorist attacks that occurred between May 24th, 2015 and May 24th, 2017 in Europe, Canada, and the United States, with particular attention paid to each perpetrator’s native status in relation to the country he or she carried out the attack in and, if applicable, whether the perpetrator was a first- or second-generation immigrant. I would like to note that there were some gaps in the information: a piece of information is labeled “unknown” if it is not known, and marked with a question mark if there is some information suggesting it to be correct but not stating it outright.

In this time frame, radical Islamic terrorists killed 434 people and injured 1,707, with the majority of deaths and injuries coming from just three attacks: Nice (July 2016), Orlando (June 2016), and Paris (November 2015).

Based on native status alone, those who were native [in relation to the country he or she carried out the attack in] killed 132 (30.4%) and injured 347 (20.3%). On the flip side, non-natives killed a total of 124 (28.6%) and injured 619 (36.3%), while those with mixed identities (this category was for those with multiple perpetrators of different native statuses) killed 177 (40.8%) and injured 735 (43.1%), and those with an unknown native-status killed 1 (0.2%) and injured 6 (0.3%).
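For transparency, here is a short Python sketch recomputing the percentages above from the raw counts. The killed/injured counts per category are the ones reported in this study; the percentages are simple division by the overall totals.

```python
# Recompute the casualty shares by perpetrator native status.
# Raw killed/injured counts per category are from the study above;
# the percentages are derived by dividing by the overall totals.

casualties = {
    # category: (killed, injured)
    "native":     (132, 347),
    "non-native": (124, 619),
    "mixed":      (177, 735),
    "unknown":    (1,   6),
}

total_killed = sum(k for k, _ in casualties.values())    # 434
total_injured = sum(i for _, i in casualties.values())   # 1,707

for category, (killed, injured) in casualties.items():
    print(f"{category:>10}: killed {killed} ({killed / total_killed:.1%}), "
          f"injured {injured} ({injured / total_injured:.1%})")
```

The category counts sum exactly to the 434 killed and 1,707 injured reported above.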

On the surface, comparing only native and non-native perpetrators, natives have a higher death count, and those with mixed statuses have both a higher death and injury count. There is another interesting factor when it comes to natives, however: their immigration generation status, be they first-generation immigrants (the ones who immigrated to a country) or second-generation (their parents immigrated).

Without regard to native status, first-generation immigrants killed 125 (28.8%) and injured 615 (36%), second-generation immigrants killed 115 (26.5%) and injured 265 (15.5%), and a mixture of the two killed 144 (33.2%) and injured 392 (23%). The totals for these categories are that first- and second-generation immigrants killed 384 (88.5%) and injured 1,272 (74.5%). Those who were native and not a first- or second-generation immigrant (i.e., purely native in regard to birth and ancestry) killed 11 (2.5%) and injured 74 (4.3%).

It is therefore a reasonable conclusion that radical Islamic terrorists who are non-native, or who are native AND a first- or second-generation immigrant, are the ones who cause the most casualties. But why is this? Part of it, I argue, has to do with integration, or the lack of it.

From this, it can also be concluded that the reason purely native perpetrators do not kill as many is a sociological attachment to their country, although as far as I have seen, there is no literature on the subject. And although first-generation immigrants, as I showed in a previous post on this blog, tend to commit less crime than natives, this does not seem to hold true for radical Islamic terrorism, as evidenced by the data above.

In regards to second-generation immigrants, their identity is relatively unsettled, as they are caught between the identity (or identities) of their parents’ country and a new identity formed in the country they were born into. In general, there are “unique assimilation experiences and challenges faced by the children of immigrants.”[1]

In the French/European context, Olivier Roy noted that “there is no such thing as third or fourth-generation jihadis [in France] […] This phenomenon of new jihadis in Europe is primarily a generational revolt. […] They [second-generation Muslim immigrants] join the second generation’s ‘estranged’ Islam, which manifests in a generational, cultural and political rupture. In short, there is no point in offering them a moderate Islam. Radicalism attracts them by definition”[2] and that they “are raised in an environment where they will constantly be torn between the heritage and host culture of what they learn in the home and at school.”[3]

Sandra Bucerius notes that “research findings in European countries indicate that some second-generation immigrant groups have crime rates that drastically exceed those of the native-born population.”[4]

Ronald Inglehart and Pippa Norris argue that if the multiculturalism thesis is correct, then “any significant cultural differences among majority and minority populations are not expected to diminish among second and third generation migrants; indeed if alienation from the West has occurred, as some observations suggest, then this could even potentially strengthen Muslim identities among younger populations.”[5]

Whether it is a lack of a settled identity resulting in rebellion or the strengthening of religious identities as a result of alienation, these Muslims are targeted by extremist groups like ISIS because of this vulnerability. It is also likely that among both first-generation and second-generation Muslim immigrants, events in the wider Muslim world, particularly the West’s military operations, feed into this radicalization.

When they see their countries, or the countries of their parents, attacked, it provokes a response in these young Muslims, which ISIS uses in an attempt to encourage attacks abroad by Muslims residing in their own countries, as evidenced by a recent video published in mid-May by the Media Office of Wilayat Ninawa. In the video, American ISIS fighter Abu Hamza Al-Amriki [Abu Hamza the American] asks Muslims in the United States, “does it not pain you to see your brothers with their honor having been violated, and their bodies having been torn into pieces by the American airstrikes and their destructive weapons?”[6]

Final note: When I started this, I wanted to keep to the two-year time frame and not deviate from that goal. After the two-year time frame ended, however, there were two other attacks: one in London on June 3rd and one in Paris on June 6th. The attackers in these two attacks, Khuram Butt[7] (born in Pakistan), Rachid Redouane[8] (born in Morocco or Libya), Youssef Zaghba[9] (born in Morocco), and Farid Ikken[10] (born in Algeria, moved to Sweden), were all foreign nationals.


European Immigration

While much has been said about this problem, I thought I should offer my own two cents. Immigration from the Middle East and northern Africa, especially in light of what happened in Germany lately, is concerning, to put it lightly. It’s a hard problem to fix, as I explained when I argued for my own solution in a previous blog post, but it should be fixed nonetheless. Now, I do agree that we should help those in need, but given the risk in Europe right now, after a year of constant ISIS attacks, that help is not worth as much as it once was. ISIS’s call for attacks in countries that are part of the global coalition against it is a focal point of its ideology and propaganda efforts.

The other part of the problem is the quick pace at which these refugees entered Europe. The vetting process of the United Nations High Commissioner for Refugees is stringent, true, but what we don’t know for sure is how many of the refugees mass-emigrating to Europe were vetted by the UN. Furthermore, the risk of ISIS members infiltrating the refugee flow is, statistically, low, and even then, ISIS has highly encouraged emigrating to ISIS-held territory, not to Europe, which in their mind is “a dangerous major sin.”[1] Information from Frontex shows that between 2013 and 2015, there were 2,207,745 illegal border crossings on all main migration routes into the European Union (minus the Eastern Borders route between the EU’s eastern member countries and Russia, Belarus, Ukraine, and Moldova).[2] While I agree that we should help in some form, as I stated before, too many refugees in too short a time span without much proper vetting presents a problem. Even then, there is still the possibility that some could be radicalized after the vetting process, once they’re in the country; this is something that could be stopped, and it has produced warning signs within European intelligence organizations, but their infighting and other internal problems do not help the situation.

In short, further immigration efforts to Europe should be conducted legally, with strict vetting, and quota limits should apply.

The risks are just too high for the current policy to continue. Consider the following information about jihadi attacks [mostly conducted by ISIS] in Europe in recent years, compiled by terrorism researcher Thomas Hegghammer.

Between 2014 and 2016, jihadi attacks killed 273 people, more than in all previous years combined (267). Between 2011 and 2015, almost 1,600 people were arrested in jihadism-related investigations in the EU (excluding the UK); an increase of 70% compared with the previous five-year period. In 2015 and 2016, there were 14 jihadi attacks, about 3.5 times more than the biannual average (6) for the preceding fifteen years, [there] were 29 well-documented attack plots, about 2.5 times more than the biannual average (12), [and about] half of the serious plots reached execution, compared with less than a third in the preceding fifteen years.[3]

[1]: Dabiq, Issue 11, Page 23

Denmark a Socialist Paradise?

I need to start posting more here.

So hopefully you all won’t mind some heavy reading to keep you wanting more after a two month absence.

In this installment, I am responding to an article posted by US Uncut, an outlet so full of economic ignorance that I didn’t even know how to respond… until I did. The original article is here if you want to look at it first.

Here are 9 reasons Denmark’s socialist economy leaves the US in the dust

You done reading? Alright, time to do my thing. The three main differences between the United States and Denmark (much of this can also apply to the other Nordic countries) are as follows.

1): Currency exchange rates between the two countries. As of Sept. 25th, 2016 at 3:47am UTC, one US dollar was worth 6.64 Danish kroner.[1]

2): Population is the starkest contrast you see on the surface. The population of the United States was 317,780,510 as of 2014[2], while the population of Denmark was (also as of 2014) 5,627,235.[3] Thus, the United States’ population is 5,547.2% bigger than Denmark’s. I put this in perspective because when trying to emulate programs that other countries have, the United States should at least be cautious, given the number of people involved and the amount of money spent. Denmark’s population is less than that of New York City (8.5 million)[4] and smaller than (using 2015 numbers) that of 20 of our states here in the US.[5]
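The 5,547.2% figure is just the ratio of the two 2014 population counts, expressed as “percent bigger”; a quick Python check:

```python
# Express the US/Denmark population gap as a "percent bigger" figure,
# using the 2014 counts cited above.

us_population = 317_780_510  # United States, 2014
dk_population = 5_627_235    # Denmark, 2014

ratio = us_population / dk_population  # ~56.5 times as large
percent_bigger = (ratio - 1) * 100     # ~5,547.2% bigger

print(f"The US population is {ratio:.1f}x Denmark's, "
      f"i.e. {percent_bigger:,.1f}% bigger.")
```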

3): The cultural differences between the two countries, specifically business culture and work ethic. In 2007, six economists from Denmark, Finland, and Sweden noted that, although Nordic culture is highly secular, it “is strongly influenced by the Lutheran faith, which gives prominence to a strong work ethic and solidarity between members of society (and even conformist pressures). The long history of independent farmers and the tradition of local self-governance are other features worth noting.”[6] Robert Pateman writes that “This work ethic is established during childhood when Danish children are encouraged to find afterschool jobs, such as delivering newspapers or leaflets. At the same time, it is very much part of Danish culture that work should not be allowed to dominate life.”[7]

The United States started out the same way. As Steven Malanga writes, “sociologist Max Weber dubbed the qualities that Tocqueville observed the ‘Protestant ethic’ and considered them the cornerstone of successful capitalism. Like Tocqueville, Weber saw that ethic most fully realized in America, where it pervaded the society. Preached by luminaries like Benjamin Franklin, taught in public schools, embodied in popular novels, repeated in self-improvement books, and transmitted to immigrants, that ethic undergirded and promoted America’s economic success.”[8] Malanga continues, “After flourishing for three centuries in America, the Protestant ethic began to disintegrate, with key elements slowly disappearing from modern American society, vanishing from schools, from business, from popular culture, and leaving us with an economic system unmoored from the restraints of civic virtue.”[9]

In 2010, a Pew Research Center report showed that Millennials are the only generation that does not cite “work ethic” as one of its principal claims to distinctiveness, whereas it showed up for Gen X (11%), the Boomer generation (17%), and the Silent generation (10%). The reason, as the Pew Research Center notes, is that “Millennials may be a self-confident generation, but they display little appetite for claims of moral superiority.”[10] Dan Schawbel of Forbes argues that “The pursuit of happiness and the American Dream drove progress and innovation, but they came with unintended side effects. In many cases, for instance, healthy ambition has morphed into avarice. Urbanization and an emphasis on large-scale businesses means fewer and fewer kids are learning about work in the natural course of family life.”[11]

One final thing to point out about the crux of the argument: Denmark is not a socialist country. For one thing, socialism is, as Business Dictionary notes, “a national financial system based on the public or cooperative ownership and administration of primary production capabilities […] economic systems typically employ central planning and use accounting systems based on the labor hours expended in production.”[12]

The same report I used for the cultural differences also warns against “a straw man version of the Nordic model. This is the perception of the Nordic model as a socialist experiment with stifling taxes and heavy-handed regulation where paternalistic bureaucrats decide the fate of citizens from cradle to grave. Presumably such a model is neither efficient nor desirable on other grounds. […] Clearly the straw man version of the Nordic model needs to be amended.”[13] And to drive the point home, Danish Prime Minister Lars Løkke Rasmussen, speaking last year at Harvard’s Kennedy School of Government, said the following: “I know that some people in the US associate the Nordic model with some sort of socialism. Therefore I would like to make one thing clear. Denmark is far from a socialist planned economy. Denmark is a market economy. The Nordic model is an expanded welfare state which provides a high level of security for its citizens, but it is also a successful market economy with much freedom to pursue your dreams and live your life as you wish.”[14]

With that out of the way, onto the main points!

1. Indeed, this is true, and it can be thought of as a basic unemployment insurance plan. While the rates vary by state, “workers can collect the payments for as long as 99 weeks in states with the highest unemployment rates.”[15] This is 5 weeks less than the up-to-two-year period for the Danish. “States determine the amount of the benefits, but they average 36 percent of the average weekly wage, according to the National Employment Law Center”[16], so the Danes also have the “keep 90% of their salary” advantage. But has such a policy worked for the United States, given our limitations compared to our Nordic counterparts? I see the good intention of unemployment benefits: giving people something to live on while they search for another job. Working with six months of data from before and after extended unemployment benefits were cut in early 2014, Jeffrey Dorfman writes that “In the six months before ending the extended unemployment benefits, total employment increased by 511,000. In the six months after the benefits stopped, employment rose by 1,635,000. That means employment gains were three times as fast after ending the extended benefits. It also translates into over one million more people working than if the trend from the previous six months had continued. […] Clearly, ending extended unemployment benefits did not cause a surge of people to give up and stay at home on the couch. Rather, we went from about 350,000 per month leaving the labor force to only 50,000 per month. At the same time, job gains went from 85,000 per month to over 270,000.”[17]

Even then, the extended benefits would have cost $26 billion per year according to the Congressional Budget Office[18], and BLS data shows that long-term unemployment remains high despite the extension of benefits.[19] This also contributes to systemic long-run unemployment: the longer workers remain out of the job market, the more their skills become obsolete and the higher the likelihood of remaining unemployed.[20] So how can this be fixed? A more effective approach for workers would be the establishment of Unemployment Insurance Savings Accounts, funded by a percentage of wages contributed by the employee and employer.[21] There can be a positive spin on this too, specifically with the introduction of personal savings. As Tim Harford argues, “those without their own cash reserves are using unemployment benefits to buy themselves time to find the right job.”[22]

The working-age population statistics are correct to a degree, but the backdrop behind why the numbers are the way they are tells a different story. As of last month, the U-3 unemployment rate (the “official” unemployment rate) was 5.0%. This sounds like a good thing, until you look at the U-6 unemployment rate, which includes the total unemployed, plus all persons marginally attached to the labor force, plus those employed part time for economic reasons, as a percent of the civilian labor force plus all persons marginally attached to it: 9.7%, almost double the “official” rate.[23]

That brings us to the other number, the civilian labor force participation rate. Currently it is 62.8%, somewhat close to the article’s claim that 67% of Americans have jobs.[24] But looking at things historically, this is about the same rate as we had in early 1979, and since 2009, an astonishing 13,128,000 people have left the work force.[25] As Kevin Ryan points out, “The bottom line is that fewer people are working today than at any point in at least a generation. This lower labor force participation rate presents a challenge for underfunded programs like social security, which rely on new workers to pay into it so that retiring baby boomers can receive their promised benefits, and hinders economic growth.”[26]

For young workers, this problem is compounded by minimum wage policy. I won’t go much into it, but a summary from the Employment Policies Institute is worth reading: “High minimum wage rates lead to unemployment for teens. One of the prime reasons for this drastic employment drought is the mandated wage hikes that policymakers have forced on small businesses. Economic research has shown time and again that increasing the minimum wage destroys jobs for low-skilled workers while doing little to address poverty.”[27]

2. The taxpayer-funded universal healthcare part I will not get into, the topic being too long. But I will say that it might be (there is a bit of controversy on the topic) economically feasible for a small country like Denmark, but not so for a large country like the United States. It is true that Denmark does on average pay around $3,000 less per capita on healthcare, but it is still above the OECD average (as a share of GDP)[28], and the numbers themselves are outdated. As of 2014 (the latest World Bank data available), Denmark’s per capita spending was $6,463[29] while the United States’ was $9,402[30], still nearly $3,000 apart. The OECD also reports that in 2013, Denmark “saw increases in private spending, as user charges increased slightly for certain health care services and goods.”[31] It’s also worth noting that Denmark spends only 11.2% of its GDP on healthcare[32], while the United States spends 17.9%.[33] Even though Denmark has a public health system, it is run in a way that involves less national government and is more efficient. It’s not necessarily larger government causing the better outcome, but rather a more efficient system based on local and regional needs.

3. It’s ironic that the world’s happiest nation is Europe’s second-largest consumer of antidepressants, behind Iceland.[34] “Life satisfaction and income are highly correlated both across country averages and across individuals within a country, as pointed out in a comprehensive study of the literature by Betsey Stevenson and Justin Wolfers,” explains Otto Brøns-Petersen. “Furthermore, as pointed out by Christian Bjørnskov, a high level of trust also seems to increase life satisfaction, and, as Danes are quite trustful, that might play a role here too.”[35]

4. A good work-life balance is a good thing, but that mostly has to do with Danish culture. If by “work-life balance” they mean the number of hours worked, Denmark is in the #2 spot according to the same CNN article, with the Netherlands, also an OECD member[37], coming out on top at an average of 29 hours a week.[36] The figure of the average US worker putting in 47 hours a week is incomplete, as it only counts full-time jobs. The same Gallup source notes that “Part-time workers have averaged about 20 hours per week less than full-timers.”[38] Last year, according to BLS statistics, the United States had 148,833,000 people employed, of whom 121,492,000 (81.63%) were full-time and 27,341,000 (18.37%) were part-time.[39] In other words, the “47 hours a week” excludes nearly 20% of American workers. Nonetheless, the average for American workers as a whole is 38.6 hours[40], close to the 37 hours a week the Danish have, not to mention that there is no government-mandated work week in Denmark. And think of the bigger picture here: the average annual hours worked by Americans dropped from 1,983.7 in 1950 to 1,764.5 in 2014.[41] Furthermore, productivity has improved enormously over the past 60 years; according to data gathered by Erik Rauch, “An average worker needs to work a mere 11 hours per week to produce as much as one working 40 hours per week in 1950.”[42] The stats on paid vacation are true for Denmark, but a little mixed for the United States. Going off 2012 numbers, 77% of workers get paid vacation per year, the amount depending on how long you’ve worked for the company: 1 year (10 days), 5 years (14 days), 10 years (17 days), and 20 years (20 days).[43] Costs are indeed rising due to inflation; you can blame the Federal Reserve for that one. And one could say wages are stagnating, but wages aren’t the only thing to look at, because wage statistics leave out benefits.
Since 2000, wages and benefits have gone up 66% while inflation went up 39%. Even if we adjust for the median rather than the mean, compensation growth (49%) still exceeds inflation (39%).[44]
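The comparison above is just a matter of deflating nominal growth by inflation. A minimal sketch of that arithmetic, using only the growth figures cited in the paragraph:

```python
def real_growth(nominal_growth: float, inflation: float) -> float:
    """Deflate a nominal growth rate by inflation over the same period."""
    return (1 + nominal_growth) / (1 + inflation) - 1

# Figures cited above: since 2000, mean wages + benefits up 66%,
# median compensation up 49%, inflation up 39%.
mean_real = real_growth(0.66, 0.39)
median_real = real_growth(0.49, 0.39)
print(f"mean real growth:   {mean_real:.1%}")    # roughly 19.4%
print(f"median real growth: {median_real:.1%}")  # roughly 7.2%
```

So even at the median, real compensation growth comes out positive once the 39% of inflation is factored out.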

5. Indeed, the cost of college has become more expensive in the United States within the past 30 years, mostly due to inflation and government loans. Using the example of the University of Pennsylvania, tuition in 1950 was $600 a year[45], while last year it was $43,838 a year[46], an increase of 7,206.33%! That $600-a-year education in 1950 would cost $5,996.34 a year when adjusted for inflation[47], which is not that bad, all things considered. The other factor is government aid. If you artificially inflate demand for something and don’t let supply adjust, prices will go up, and this holds true for college education. Last year, the Federal Reserve Bank of New York reported that “institutions more exposed to changes in the subsidized federal loan program increased their tuition disproportionately around these policy changes. […] The point estimates indicate that increases in institution-specific subsidized loan maximums lead to a sticker-price increase of about 60 cents on the dollar, and that increases in the unsubsidized loan and the Pell Grant per-student maximums are associated with sticker-price increases of 15 cents on the dollar and 40 cents on the dollar, respectively.”[48] Richard Vedder writes that “From 1910 to about 1978, tuition fees rose about 1% a year adjusting for inflation–explainable by the Baumol Effect (colleges are a service industry with no opportunity for productivity advance). Since 1978, fee increases have over doubled, to closer to 3% a year, reflecting the enormous growth in student loan and grant programs.”[49] Pascal-Emmanuel Gobry also informs us that “US colleges that don’t accept Federal loans have tuition roughly half of their similarly-ranked peers.”[50] And what about other factors? Washington Monthly reports that “over the same period [1975-2005], the faculty-to-student ratio has remained fairly constant. 
[…] In 1975, colleges employed one administrator for every eighty-four students and one professional staffer, admissions officers, information technology specialists, and the like, for every fifty students. By 2005, the administrator-to-student ratio had dropped to one administrator for every sixty-eight students while the ratio of professional staffers had dropped to one for every twenty-one students.”[51] The Wall Street Journal reports that within American higher education, “the number of employees hired by colleges and universities to manage or administer people, programs and regulations increased 50% faster than the number of instructors between 2001 and 2011, the U.S. Department of Education says. It’s part of the reason that tuition, according to the Bureau of Labor Statistics, has risen even faster than health-care costs.”[52] The article also notes that colleges compete by offering “fancier dorms, dining halls, gyms and other amenities, to raise their rankings and attract students.”[53] For Denmark, school can cost some people money, specifically those outside the European Union who either (1) do not have a permanent or temporary residence permit, or (2) do not have a residence permit as the accompanying child of a non-EU/EEA parent holding a residence permit based on employment. In those cases, actual tuition can range between $8,000 and $21,000 USD.[54] Moreover, independent education has a long tradition in Denmark, where “the free choice of school and education is of central importance to a well-functioning education system. Apart from the fact that it is a goal in itself to give the students a free choice, a free choice of school and education will also further the schools’ initiative and industry.”[55]
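The University of Pennsylvania figures above reduce to two simple calculations: the nominal percentage increase, and what 1950 tuition would be in today’s dollars. A minimal sketch (the inflation multiplier here is the factor implied by the article’s own $5,996.34 figure, not an independent CPI lookup):

```python
# University of Pennsylvania tuition, figures as cited above.
tuition_1950 = 600.00       # nominal 1950 tuition, USD/year
tuition_now = 43_838.00     # nominal tuition last year, USD/year
cpi_multiplier = 9.9939     # 1950 -> present inflation factor implied by the cited $5,996.34

pct_increase = (tuition_now - tuition_1950) / tuition_1950 * 100
adjusted_1950 = tuition_1950 * cpi_multiplier
real_multiple = tuition_now / adjusted_1950  # growth beyond inflation

print(f"nominal increase: {pct_increase:,.2f}%")            # 7,206.33%
print(f"1950 tuition in today's dollars: ${adjusted_1950:,.2f}")  # $5,996.34
```

In real terms, tuition is roughly 7.3 times its inflation-adjusted 1950 level, which is the gap the loan-subsidy argument is trying to explain.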

6. A previous point mentioned that Danes earn an average of $46,000 USD annually. This is true, to an extent. But what about adjusting it for GDP per capita at PPP (Purchasing Power Parity), that is, “the total adjustment that must be made on the currency exchange rate between countries that allows the exchange to be equal to the purchasing power of each country’s currency”?[56] Looking at GDP per capita without adjusting for PPP, Denmark is in the lead, with a GDP per capita of $58,207.90 US dollars in 2015[57], while the United States had one of $51,486 that same year.[58] Adjusting for PPP, however, shows the United States has a GDP (PPP) per capita of $52,549.01 in 2015[59], while Denmark’s GDP (PPP) per capita is $43,415.23.[60] Furthermore, using the Geary–Khamis dollar (the 2000 international dollar, as it were)[61], data from the World Bank[62], International Monetary Fund[63], and CIA[64] all show a GDP (PPP) per capita anywhere between 9,202 and 10,500 international dollars higher in the United States than in Denmark. The taxation system should also be taken into account, although to keep it simple, I’m going to only use income taxes (for a single, unmarried filer) at the average income for each country in its respective currency, not adjusting for PPP. Average Danish wages are 38,957.98 krone a month[65], which is $5,873.85 USD[66], or 467,495.76 krone annually, which is $70,408.08 USD.[67] Again, not adjusting for PPP. In the United States, the average annual wage is $44,510.[68] Now for the taxes! In Denmark, there is an 8% labor-market contribution due before the income tax.[69] Then there is the income tax of 55.8%[70], making the total 239,993.62 krone, or $36,103.57 USD. The tax rate for the average annual wage for Americans is 25%[71], so the American take-home pay would be $33,382.50. Not that big of a difference, to be sure, but once again, I’m not adding PPP here. 
Just simple currency conversions. And then there’s the standard of living, where Daniel Mitchell notes that data from the OECD and the Danish Finance Ministry “suggest that Americans enjoy higher living standards than their Danish counterparts.”[72] Even Danish-Americans have a higher standard of living here in the United States than the Danes in Denmark, by about 55%.[73] Cost of living is a give and take: consumer prices, restaurant prices, and local purchasing power are higher in Denmark by 15.72%, 47.11%, and 1.49% respectively, while rent prices and grocery prices are lower in Denmark by 11.21% and 10.88% respectively.[74]
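The way the per-capita ranking flips once purchasing power is accounted for can be shown directly from the 2015 figures cited in this point. A minimal sketch using only those numbers:

```python
# GDP per capita, 2015, as cited above (USD).
nominal = {"Denmark": 58_207.90, "United States": 51_486.00}
ppp     = {"Denmark": 43_415.23, "United States": 52_549.01}

lead_nominal = max(nominal, key=nominal.get)  # leader before PPP adjustment
lead_ppp = max(ppp, key=ppp.get)              # leader after PPP adjustment

gap = ppp["United States"] - ppp["Denmark"]   # PPP-adjusted gap, USD per capita
print(f"nominal leader: {lead_nominal}; PPP leader: {lead_ppp}; gap ${gap:,.2f}")
```

Denmark leads on the raw exchange-rate figures, but once each currency’s purchasing power is equalized the United States comes out roughly $9,100 per capita ahead, which is the crux of this point.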

7. By whose definition are we defining poverty? Each country has its own poverty threshold, but some international figures show a different side of the story. The Human Poverty Index shows that Denmark’s poverty rate is 8.2% while the United States’ is 15.4%.[75] Moreover, the OECD reports that the United States has a poverty rate of 17.3%[76], while Denmark has theirs at 6.1%.[77] Also, one look at the possessions owned by Americans living in poverty suggests that they are indeed wealthy compared to people in other countries.[78]

8. And why is the US ranked #18 for best country for business? Looking at the updated 2015 version, we find that Denmark is still #1, while the United States has slipped to #21, worse than the year before. The United States “scores poorly on monetary freedom and bureaucracy/red tape [among other factors]. More than 150 new major regulations have been added since 2009 at a cost of $70 billion, according to the Heritage Foundation. […]” while Denmark scores “particularly well for freedom (personal and monetary) and low corruption. The regulatory climate is one of the world’s “most transparent and efficient,” according to the Heritage Foundation.”[79] Furthermore, the corporate tax rate is more business-friendly in Denmark than in the United States, with Denmark having a corporate tax rate of 23.5%[80] while the United States has theirs at 38.9%.[81] But how can they tax so much and get away with such prosperity? Henrik Kleven explains that “far-reaching information trails that facilitate tax compliance, broad tax bases that limit the scope of legal tax avoidance, and large public spending focused on complements to work” are the key to success.[82] However, Kleven also tells us that “social and cultural factors may make it easier to enact these kinds of policies, and in turn the social and cultural norms may themselves be driven by the design of policies and institutions,” and warns that “replicating the Scandinavian policies and institutions in societies that are fundamentally different is unlikely to be achievable or perhaps even desirable. The point is instead for countries everywhere to think carefully about how to collect taxes and redistribute income with less distortion from tax evasion, tax avoidance, and reduced labor supply, and the Scandinavian experience may provide ideas on how to expand the conversation about these important questions.”[83]

9. Wrong, new American parents do get something; it is “nothing” only when it isn’t mandated by the government. Ernst & Young leads the nation with 39 weeks of maternity leave[84], while Reddit leads with 17 weeks of paternity leave.[85] 12% of American workers get paid family leave, something that has improved over time from the 1% in 1992.[86] The Cato Institute wrote a paper back in 1988 highlighting some concerns with then-proposed family-leave legislation. They concluded that “the presence of more women in the labor market could bring about more employer discrimination against women. Women with education, experience, and training would fare better than those without. Women beyond childbearing age would fare better than younger women. And single women would benefit at the expense of married women.”[87] Claire Cain Miller of the New York Times analyzed 22 countries that implemented a maternity leave policy, and found that the policies hurt the women they were trying to help: even when those women had jobs, they were likelier to be in “dead-end” positions and less likely to hold managerial posts. Moreover, because of the American Family and Medical Leave Act, women are slightly more likely to stay employed but receive fewer promotions (and that’s just unpaid leave!).[88] Even then, studies have shown that such policies can help, but they can also harm. Expanding the length of a woman’s maternity leave [from 18 to 35 weeks] adds no additional benefit to the child’s welfare, nor does it create benefits for the couple’s marriage.[89] 
On a somewhat positive note, maternity leave does give the added benefit of improved child development; however, “the estimates suggest a weak impact of the increase in maternal care on indicators of child development.”[90] And although maternity leave does not at first glance interrupt a woman’s career progression, extended maternity leave does cause human-capital depreciation, which creates problems for a woman’s career prospects in the long term. The OECD reports that “women who make full use of their maternity or parental leave entitlements receive, on average, lower wages in the years following their resumption of work than those who return before leave expires [and] can permanently damage [mothers’] ability to achieve their labor market potential.”[91] “It’s all about demand, supply, and prices,” notes Nita Ghei, senior policy research editor for the Mercatus Center at George Mason University. “When the after-tax wage (the “price”) increases, more women will be willing to work (that is, supply increases). A tax cut has the advantage of not increasing costs to employers, so there is no decrease in demand, as there would be with a mandated paid leave provision.”[92] “To acquire and retain quality employees,” says Laurence Vance, “most employers offer employees a variety of fringe benefits, vacation pay, sick leave, paid time off, holiday pay, jury-duty pay, child care, and discounts on food, merchandise, or services, none of which is mandated by the government. It should be no different with family leave. Whether an employer offers it, whether it is paid or unpaid, and what the length of it is, is a matter to be settled by agreement between the employer and employee.”[93]

The last part of the article ends by asking the reader whether this “socialist” dream is “an ideal vision of what Americans could have if we came together and demanded it from our government?” Here’s the thing: it could very well be what Americans could have; this is indeed possible. However, there is a catch, two, in fact. The first is that to achieve these lofty goals, an amendment to the Constitution must be put forth. One might argue (wrongfully) that this is constitutional because Article I, Section 8, Clause 1 of the Constitution says Congress “shall have Power To lay and collect Taxes, Duties, Imposts and Excises, to pay the Debts and provide for the common Defence and general Welfare of the United States”. That last phrase, “provide for the common Defence and general Welfare”, also appears in the Preamble, but the Preamble merely states the intention of the overall document and has no legal binding. As Justice Harlan noted in Jacobson v. Massachusetts, “Although that Preamble indicates the general purposes for which the people ordained and established the Constitution, it has never been regarded as the source of any substantive power conferred on the Government of the United States, or on any of its Departments. Such powers embrace only those expressly granted in the body of the Constitution.”[94] Furthermore, constitutional scholar Robert Natelson notes that the preamble “is merely declaratory of a limitation the Founders believed inherent in free government and does not have force beyond that.”[95] But even in the body of the Constitution where the “general Welfare” is mentioned, it still is not a blanket government-can-do-whatever-it-wants power, for if it were, the rest of Article I, Section 8 would not be needed. One early commentator on the Constitution noted the following in his commentary on the document. 
“A power to lay taxes for any purposes whatsoever is a general power; a power to lay taxes for certain specified purposes is a limited power. A power to lay taxes for the common defense and general welfare of the United States is not in common sense a general power. It is limited to those objects. It cannot constitutionally transcend them.”[96] Natelson further explains that “the General Welfare Clause was an unqualified denial of spending authority. It did not add to federal powers; it subtracted from them. The General Welfare Clause was designed as a trust-style rule denying Congress authority to levy taxes for any but general, national purposes.”[97]

And what better way to clear up this ambiguity than with the words of James Madison, the father of the Constitution? In Federalist 45, written in 1788, Madison described the delegated powers in the Constitution as “few and defined. Those which are to remain in the State governments are numerous and indefinite. The former will be exercised principally on external objects, as war, peace, negotiation, and foreign commerce; with which last the power of taxation will, for the most part, be connected. The powers reserved to the several States will extend to all the objects which, in the ordinary course of affairs, concern the lives, liberties, and properties of the people, and the internal order, improvement, and prosperity of the State.”[98] In 1792, Madison wrote to Henry Lee in reaction to Hamilton’s Report on Manufactures, summing up his views on the General Welfare Clause when he penned the following: “The federal Govt. has been hitherto limited to the Specified powers, by the greatest Champions for Latitude in expounding those powers. If not only the means, but the objects are unlimited, the parchment had better be thrown into the fire at once.”[99] Madison’s last veto message, of the Internal Improvements Bill in 1817, demonstrated why he rejected the bill on constitutional grounds: “To refer the power in question to the clause “to provide for the common defense and general welfare” would be contrary to the established and consistent rules of interpretation, as rendering the special and careful enumeration of powers which follow the clause nugatory and improper. Such a view of the Constitution would have the effect of giving to Congress a general power of legislation instead of the defined and limited one hitherto understood to belong to them, the terms “common defense and general welfare” embracing every object and act within the purview of a legislative trust.”[100] Even in retirement, he wrote the following in 1831. 
“With respect to the words “General welfare” I have always regarded them as qualified by the detail of powers connected with them. To take them in a literal and unlimited sense, would be a metamorphosis of the Constitution into a character, which there is a host of proofs was not contemplated by its Creators.”[101]

To be sure, this is only one founding father, but as Natelson has mentioned, this view was shared by the majority of the founders. Of those who dissented, Hamilton is the only one I know of who rejected this idea. And even if an amendment were proposed, I wish you luck. In 225 years, 11,623 constitutional amendments have been proposed, and only 27 accepted, a success rate of 0.23%.[102] An amendment to instill the Nordic welfare model in the US would have to go through either two-thirds of the House and Senate approving the proposal or a convention called by two-thirds of the legislatures of the states. Either way, it still has to get the approval of three-fourths of the states before it becomes part of the Constitution. With that being said, the best way these programs could work, given population sizes, is at the state level. But even then, state constitutions would need to be amended if this were to be done.

In conclusion, although it is true that Denmark does provide some services from the government that the United States does not, it does so mostly at the municipal and regional (equivalent to our local and state) governments, rather than at the national (our federal) level. Denmark recognizes the benefit of decentralization and, as a result, manages to beat the United States in economic freedom. Denmark’s government is much more efficient, and it relies on the equivalents of our county and state governments, whereas this article seems to want all of these things under federal control, a pipe dream that simply doesn’t stand up to the facts.


The 400 Million Dollar Question

Did the Obama Administration give $400 million to Iran in exchange for hostages? In a word, no. Let me explain. The $400 million that the United States sent to Iran is part of a larger installment of $1.7 billion[1], of which $1.3 billion is interest.[2]

The money was not for the 4 American hostages freed the day after; that was a separate deal (with a separate negotiating team)[3] that lasted about a year, one that involved five Americans, including Pastor Saeed Abedini and Washington Post journalist Jason Rezaian, in exchange for 7 Iranians, six of whom are dual American-Iranian citizens, none of whom has as of yet returned to Iran.[4]

The $400 million sent to Iran was their money to begin with. Before the Iranian revolution in 1979, the government under Shah Reza Pahlavi requested and paid for some U.S. fighter jets.[5] The jets were never delivered, and all Iranian assets were frozen during the hostage crisis; the Iranians neither got what they paid for nor were refunded. The Algiers Accords in 1981 sought to settle some financial disputes between the two countries, but did not resolve everything. The Iran–United States Claims Tribunal was then set up to resolve the remaining issues, before which Iran sought $10 billion over the original $400 million dispute.[6]

“U.S. officials had expected a ruling on the Iranian claim from the tribunal any time,” wrote Associated Press reporter Matt Lee, “and feared a ruling that would have made the interest payments much higher.”[7] This deal wasn’t secret, either. Obama discussed it back in January, saying that “for the United States, this settlement could save us billions of dollars that could have been pursued by Iran. So there was no benefit to the United States in dragging this out. With the nuclear deal done, prisoners released, the time was right to resolve this dispute as well.”[8]

And the United States has also benefited from the Claims Tribunal during its first 20 years, to the tune of $2.5 billion in awards to U.S. nationals and companies.[9] Both the 35-year-old settlement and the hostage situation ended up being resolved just a day apart, killing two birds with one stone. Whether or not it was a ransom payment, the Iranians would have gotten the money no matter what.

“Iranian press reports have quoted senior Iranian defense officials describing the cash as a ransom payment”, noted the Wall Street Journal in the article that started the entire controversy, albeit 7 months late.[10] Reportedly, senior Justice Department officials objected to paying the $400 million over concerns that the Iranians would consider it a ransom payment, sending some wrong signals.[11] But that’s just Iran being Iran, trying to look strong to their domestic audience.

Nonetheless, I do believe that the actual deal could have gone through more legal channels. As Andrew C. McCarthy puts it, “The law on which the anti-terrorism sanctions are based gives the president broad waiver discretion. [Obama] could have issued a waiver in order to enable our government to pay Iran.”[12] Not to mention that I am also concerned about where the money would actually go, seeing that Iran is a state-sponsor of terrorism. Where it would end up though, I have no idea.

But in the end, in the words of Barbara Slavin, acting director of the Future of Iran Initiative at the Atlantic Council, “It was an opportunity for countries with no diplomatic relations to clear away a number of diplomatic disputes. For the U.S., it was important to get back the detained Americans, and the Iranians wanted their seven citizens out of jail.”[13]


It’s Time to Audit the Pentagon

A Department of Defense internal review by its Inspector General shows that “the Office of the Assistant Secretary of the Army and the Defense Finance and Accounting Service Indianapolis did not adequately support $2.8 trillion in third quarter journal voucher [a written authorization prepared for every financial transaction, or for every transaction that meets defined requirements] adjustments and $6.5 trillion in yearend JV adjustments made to AGF [Army General Fund] data during FY 2015 financial statement compilation. […] In addition, DFAS Indianapolis did not document or support why the Defense Departmental Reporting System‑Budgetary, a budgetary reporting system, removed at least 16,513 of 1.3 million records during third quarter FY 2015.”[1]

In other, non-technical words, the Pentagon has not adequately accounted for $6.5 trillion in adjustments for FY 2015 (and perhaps even further back) and is missing at least 16,513 records from the third quarter of FY 2015 alone.

In 2013, Reuters found that “the Pentagon is largely incapable of keeping track of its vast stores of weapons, ammunition and other supplies; thus it continues to spend money on new supplies it doesn’t need and on storing others long out of date. […] A review of multiple reports from oversight agencies in recent years shows that the Pentagon also has systematically ignored warnings about its accounting practices.”[2]

Back in 1996, Congress passed a law requiring every federal agency to be audited[3], and in 2009, Congress said it would ensure that “the financial statements of the Department of Defense are validated as ready for audit by not later than September 30, 2017.”[4] To date, the Pentagon/Department of Defense is the only agency that has failed to be audited, meaning the $8.5 trillion in taxpayer money given to the Pentagon since 1996 has never been fully accounted for.

There have been some smaller incidents over the years, some of which the Fiscal Times compiled into a list last year: spending $1 billion to destroy $16 billion worth of ammo it didn’t actually need; failing to keep tabs on $300 million meant to help fund the payroll of the Afghan National Police; failing to track $500 million worth of military equipment given to Yemen since 2007; being overcharged $1 billion by federal contractors (who dodged work hours and neglected safety requirements) for loose bolts and damaged aircraft; and spending $900 million more than estimated on naval ships.[5]

I could go on about the wasteful spending, but in the end, I want to offer a solution. The Pentagon needs to be audit-ready by September 2017, as I previously mentioned. If it isn’t, then I suggest that Congress not accept any new funding requests from the Pentagon until an audit is at least started. The Pentagon can get by with the $585 billion it requested for FY 2016. A small price to pay for 20 years and $8.5 trillion of unaccounted funds.


The Drug War: A Short Social History

Historically, the War on Drugs began in the United States with the passing of the Harrison Narcotics Tax Act in 1914.

The opium problems plaguing the Far East encouraged Secretary of State William Jennings Bryan to push the act through under the pretense of fulfilling the 1912 International Opium Convention treaty, the main reason the bill was created.[1] The act laid out the licensing and taxing of opium and other related products, along with a provision that allowed registered physicians to prescribe these kinds of drugs “in the course of his professional practice only.”[2] Law enforcement interpreted this to mean that a doctor could not prescribe opioid-related products to an addict to maintain his addiction. As a result, physicians were targeted and imprisoned by police, eventually leading to an underground market and criminal acts.[3]

Six weeks later, the New York Medical Journal noted the following: “The immediate effects of the Harrison anti-narcotic law were seen in the flocking of drug habitués to hospitals and sanatoriums. Sporadic crimes of violence were reported too, due usually to desperate efforts by addicts to obtain drugs, but occasionally to a delirious state induced by sudden withdrawal. The really serious results of this legislation, however, will only appear gradually and will not always be recognized as such. These will be the failures of promising careers, the disrupting of happy families, the commission of crimes which will never be traced to their real cause, and the influx into hospitals to the mentally disordered of many who would otherwise live socially competent lives.”[4]

Time and time again, the evidence that drug and narcotics laws don’t work has piled up over the years. In 1926, an Illinois Medical Journal concluded that “the Harrison Narcotic law should never have been placed upon the Statute books of the United States. It is to be granted that the well-meaning blunderers who put it there had in mind only the idea of making it impossible for addicts to secure their supply of “dope” and to prevent unprincipled people from making fortunes, and fattening upon the infirmities of their fellow men. […] The doctor who needs narcotics used in reason to cure and allay human misery finds himself in a pit of trouble. The lawbreaker is in clover. It is costing the United States more to support bootleggers of both narcotics and alcoholics than there is good coming from the farcical laws now on the statute books.”[5]

A decade later, former chief of police, professor of police administration, author, and president of the International Association of Chiefs of Police August Vollmer wrote the following: “Drug addiction, like prostitution and like liquor, is not a police problem; it never has been and never can be solved by policemen. It is first and last a medical problem, and if there is a solution it will be discovered not by policemen, but by scientific and competently trained medical experts whose sole objective will be the reduction and possible eradication of this devastating appetite. There should be intelligent treatment of the incurables in outpatient clinics, hospitalization of those not too far gone to respond to therapeutic measures, and application of the prophylactic principles which medicine applies to all scourges of mankind.”[6]

Sociologist and Indiana University Professor Alfred Lindesmith wrote in 1940 that “solemn discussions are carried on about lengthening the addict’s already long sentence and as to whether or not he is a good parole risk. The basic question as to why he should be sent to prison at all is scarcely mentioned. Eventually, it is to be hoped that we shall come to see, as most of the civilized countries of the world have seen, that the punishment and imprisonment of addicts is as cruel and pointless as similar treatment for persons infected with syphilis would be.”[7]

Rufus King, Esq., chairman of the American Bar Association’s committee on narcotics, summed up his own views in the 1953 edition of Yale Law Journal. “The true addict, by universally accepted definitions, is totally enslaved to his habit. He will do anything to fend off the illness, marked by physical and emotional agony, that results from abstinence. […] Drugs are a commodity of trifling intrinsic value. All the billions our society has spent enforcing criminal measures against the addict have had the sole practical result of protecting the peddler’s market, artificially inflating his prices, and keeping his profits fantastically high.”[8]

In 1957, Dr. Karl Bowman, one of America’s most knowledgeable psychiatrists and authorities on narcotics, said that “for the past 40 years we have been trying the mainly punitive approach; we have increased penalties, we have hounded the drug addict, and we have brought out the idea that any person who takes drugs is a most dangerous criminal and a menace to society. […] Our whole dealing with the problem of drug addiction for the past 40 years has been a sorry mess.”[9]

“Just why the alcoholic is tolerated as a sick man while the opiate addict is persecuted as a criminal is hard to understand”, lamented biochemist Dr. Robert de Ropp in 1957.[10] A year later, a study of the problem of narcotics published by the joint Committee on Narcotic Drugs of the American Bar Association and American Medical Association declared that “stringent law enforcement has its place in any system of controlling narcotic drugs. However, it is by no means the complete answer to American problems of drug addiction. […] The very severity of law enforcement tends to increase the price of drugs on the illicit market and the profits to be made therefrom.”[11]

Not surprisingly, all of this conventional wisdom was ignored when President Nixon launched the War on Drugs.

After 40 years of the War on Drugs, drug use is rampant and violence even more brutal and widespread. Even the Director of the Office of National Drug Control Policy, Gil Kerlikowske, concludes that the War on Drugs hasn’t worked. “In the grand scheme, it has not been successful,” Kerlikowske told The Associated Press. “Forty years later, the concern about drugs and drug problems is, if anything, magnified, intensified.”[12]

And there are even greater unintended effects of the War on Drugs, such as the militarization of anti-drug policies, which have affected not only the United States but the rest of the world as well. The incidence of killings, kidnapping, rape, and robbery is reaching levels comparable to the social effects of a civil war in many countries harboring narco-mini-states.[15] The United Nations Office on Drugs and Crime has reported that “Reports of disturbed family life related to drugs are frequent in the literature”, but also noted that “while the family group can, under certain circumstances, be the origin of drug problems, it can also be a potent force for treatment.”[16]

Between 1970 and 2010, the American population increased by 51.76%; despite this, the rate of unintentional drug overdose deaths per 100,000 population rose 533.33% over a near-identical timeframe (1970-2007).[13] A similar story holds for the federal prison population, which grew from about 200,000 people in 1970 to 1.57 million in 2013,[14] nearly half of them (46.3%) convicted of drug offenses.[17]
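The percentage figures above are simple relative changes. As a minimal sketch of how such figures are computed (the endpoint values below are illustrative assumptions of my own, not numbers taken from the cited sources):

```python
def pct_change(old: float, new: float) -> float:
    """Relative change from old to new, expressed as a percentage."""
    return (new - old) / old * 100

# Illustrative (assumed) endpoints, NOT from the cited sources:
# U.S. population roughly 203 million in 1970 and 309 million in 2010.
population_growth = pct_change(203e6, 309e6)

# A rate rising from 1.0 to 6.3 deaths per 100,000 is a 530% increase,
# the same order of magnitude as the cited 533.33%.
overdose_rate_growth = pct_change(1.0, 6.3)

print(f"population: +{population_growth:.1f}%")      # roughly +52%
print(f"overdose rate: +{overdose_rate_growth:.1f}%")  # +530.0%
```

The point the comparison makes is that the overdose-death *rate* (already normalized per 100,000) grew an order of magnitude faster than the population itself.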

Even education programs, such as the Drug Abuse Resistance Education (DARE) program created in 1983, have drawn criticism concerning their effectiveness. A meta-analysis conducted in 2009 by statisticians Wei Pan and Haiyan Bai showed that teens enrolled in the DARE program were just as likely to use drugs as those who didn’t enroll.[18] But there are successful and unsuccessful ways of educating children about drugs that will have a long-lasting impact on their lives.

Pim Cuijpers of the Netherlands Institute of Mental Health and Addiction reviewed 30 studies of these successful programs and concluded that the most effective ones involve high interaction between instructors and students, teach students the social skills needed to refuse drugs, let them practice those skills with their peers, and take behavioral norms into account. Epidemiologist Melissa Stigler and her colleagues further noted that programs running for several years (rather than several months, as the DARE campaign did) helped reinforce these lessons.[19]

A word of advice penned by John Trenchard and Thomas Gordon in their famous “Cato’s Letters” series could offer a bit of a remedy. “Let people alone, and they will take care of themselves, and do it best; and if they do not, a sufficient punishment will follow their neglect, without the magistrate’s interposition and penalties. It is plain, that such busy care and officious intrusion into the personal affairs, or private actions, thoughts, and imaginations of men, has in it more craft than kindness; and is only a device to mislead people, and pick their pockets, under the false pretence of the public and their private good. To quarrel with any man for his opinions, humors, or the fashion of his clothes, is an offence taken without being given.”[20]



Independence Day

To me, Independence Day is a reminder of our separation from the yoke of the British Empire. It should be something celebrated and cherished, yet at the same time, we should be seeking our own independence from the yoke of an overreaching government.

“We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.” These are the words that embody the very soul of the American tradition of freedom, something that I personally cherish, as do many other Americans.

And then there’s the bittersweet aftertaste. For the first century of our nation’s existence, with some minor hiccups along the way, we followed these basic principles written in the Declaration of Independence, until they began to slip from the national consciousness. Many of the grievances listed in the Declaration of Independence are relevant today: “obstruction of the Administration of Justice, sending hither swarms of Officers to harrass our people, imposing Taxes on us without our Consent”. And I don’t blame the Obama administration alone for this; rather, I blame pretty much every president, to one degree or another, from Theodore Roosevelt down to today.

I can go on about this, enough to make dozens of blog posts even, but I’ll leave it at what I have above.

To the founders, independence meant, as Samuel Johnson’s 1755 dictionary defined it, “Freedom; exemption from reliance or control; state over which none has power”.[2]

The British St. James’s Chronicle noted in September 1776 that on July 11th, “the Declaration of Independence was read at the Head of each Brigade of the Continental Army, posted at and near New York, and everywhere received with loud Huzzahs, and the utmost Demonstrations of Joy.”[1]

Such “Demonstrations of Joy” today are shown through the lighting of fireworks, but even that has been hindered: when I lived in California, for example, there was a $1,000 fine for the use of “prohibited” fireworks (e.g., Roman candles).

Independence Day should be seen as an inspiration, as a historical example of how our founders paved the road to their freedom. Now we must do the same with ours.

[1]: St. James’s Chronicle, September 1776

Justification for Terrorism?

This one is from another paper I wrote for my National Security class, specifically discussing the United States and state-sponsored terrorism (a controversial topic in and of itself), but this excerpt touches on the controversial question of whether terrorism can or cannot be justified, along with an example we should all be too familiar with. In light of the terrorist attack/mass shooting in Orlando and the recent attack in Turkey, I do not mean to justify them by any stretch of the imagination. Rather, I try to provide a scholarly basis on which terrorism MIGHT be justified; it is not a position I take myself.

“The literature on the justification for terrorism, especially related to state and state-sponsored terrorism, is quite scant, primarily because the state, as an entity, has been thought of as the only group that can legitimately use violence, while terrorism is inherently illegitimate.[1] Nonetheless, there has still been some debate in academic ethical and political research. Professor of Philosophy and Ethics J. Corlett notes that “targeting of the innocent violates the fundamental moral intuition that innocent persons ought not be targets or victims of violent physical attack. How, then, can it be morally justified? What possible role can terrorism have in society besides a negative one?”[2] He later goes on to state that “terrorism is morally problematic to the extent that it targets or results in the harming of innocents. […] But even if terrorism is unconcerned with the harming of innocent persons, it hardly follows from this supposition that terrorism must be directed at innocents. Indeed, most terrorist activity, whether morally justified or not, is aimed at a perceived wrongdoer or group of wrongdoers.”[3] Using Corlett’s definition of terrorism[4], only the Afghan mujahedeen’s use of terrorism in the Afghan-Soviet war is justified against the country’s Soviet occupiers [compared to the other terrorist funding ventures I mentioned in the paper]. Igor Primoratz argues along a similar line as Corlett, but for different reasons, specifically that “recourse to it [terrorism] may be morally permissible, if a people or a political community finds itself in extremis [in extreme circumstances], and terrorism is the only way out.”[5]

Even if a state finds itself in extremis, that still does not legitimize the taking of innocent lives in the process (per Corlett’s model), and the United States’ support for state-sponsored terrorism even went so far as to support terrorism within the United States. One example of this was Operation Northwoods, formulated by the Joint Chiefs of Staff to respond “to a request of [the Chief of Operations, Cuba Project] office for brief but precise description of pretexts which would provide justification for US military intervention in Cuba.”[6] The document, classified as “TOP SECRET SPECIAL HANDLING NOFORN”, called for, among other things: “A series of well coordinated incidents will be planned to take place in and around Guantanamo to give genuine appearance of being done by hostile Cuban forces. […] blow up a US ship in Guantanamo Bay and blame Cuba. […] develop a Communist Cuban terror campaign in the Miami area, in other Florida cities and even in Washington. […] Hijacking attempts against civil air and surface craft should appear to continue as harassing measures condoned by the government of Cuba.”[7] The plan was rejected by Secretary of Defense Robert McNamara and President John F. Kennedy, but it nonetheless shows the willingness of top military brass to conduct acts of terrorism, even on American soil against American citizens.

[But] as a United Nations co-sponsored international counter-terrorism conference in Tunis concluded, “terrorism has no justification, no matter what pretext terrorists may use for their deeds.”[8] Terrorism can come in many forms, but none of them are justifiable, even if they might have some good intentions.”

[1]: Lamb, Melayna. “Can the Concept of State Terror Be Theoretically Justified?” E-International Relations. October 13, 2012. Accessed March 22, 2016.
[2]: Corlett, J. “Can Terrorism Be Morally Justified?” Public Affairs Quarterly 10, no. 3 (July 1996): 163-84. Accessed March 22, 2016.
[3]: Ibid.
[4]: “Terrorism is the attempt to achieve (or prevent) political, social, economic, or religious change by the actual or threatened use of violence against other persons or other persons’ property; the violence (or threat thereof) employed in terrorism is aimed partly at destabilizing the existing political or social order, but mainly at publicizing the goals or cause espoused by the terrorists or by those on whose behalf the terrorists act; often, though not always, terrorism is aimed at provoking extreme counter-measures which will win public support for the terrorists and their cause.”
[5]: Primoratz, Igor. “Terrorism Is Almost Always Morally Unjustified, but It May Be Justified as the Only Way of Preventing a “moral Disaster”.” EUROPP. August 29, 2013. Accessed March 22, 2016.
[6]: “Operation Northwoods.” Smeggy’s Forums. Accessed March 21, 2016.
[7]: Ibid.
[8]: “Terrorism Can Never Be Justified, Participants at Joint UN Conference Conclude.” UN News Center. November 19, 2007. Accessed March 21, 2016.

Solving the European Refugee Crisis: A Simple Guide

The refugee crisis in Europe could be solved, with or without the Brexit referendum that recently happened. And here’s a simple guide to doing just that.

#1: Stop funding and arming rebel groups attempting to overthrow the Syrian government. Since at least 2013 (and maybe even further back), the CIA has been funding and arming the Syrian rebels,[1] even though the rebels themselves say that they haven’t received many of the shipments.[2] Throughout 2013 and 2014, reports of war crimes by Syrian rebels emerged, weeks and even months before more arms from the CIA started funneling in.[3] In April 2014, Free Syrian Army commander Jamal Maarouf admitted that his forces often conduct joint operations with Jabhat al-Nusra, al-Qaeda’s branch in Syria.[4] This goes even further back: in June 2013, the FSA’s Northern Front commander, Colonel Abdel Basset al-Tawil, admitted to working with al-Nusra and wanted Syria to be ruled by sharia law.[5][6] Furthermore, in November 2013, FSA Colonel Abdul Jabbar al-Oqaidi said in an interview that relations between the FSA and ISIS were good, and the FSA even supported ISIS in the 2012-2013 siege of the Menagh military air base.[7] And it’s not just us. Our allies, with Saudi Arabia and Qatar in the spotlight for much of it, have been supplying Syrian rebel groups with weapons that eventually end up in the hands of ISIS and its affiliates.[8] This has caused nothing but chaos and destruction. Money being funneled into these shady operations should be immediately redirected to an intensive reconstruction effort.

#2: Pressure Turkey and Jordan to cut off the supply routes that ISIS exploits to move oil and foreign fighters (this goes especially for Turkey), and withdraw foreign aid from any country that allows the sale of oil from ISIS territory and/or allows money and materials to reach the group.[9] In other words, break their supply chain, a product of pipeline politics in Syria.[10]

#3: I say this one with some hesitation, but support the Syrian government. Roughly five months of airstrikes against ISIS between August 2014 and January 2015 only helped expand ISIS territory rather than shrink it.[11] And at least several former top military officials have admitted that the airstrike strategy isn’t working. Ret. Gen. Raymond Odierno, the former chief of staff of the Army, argues that you can’t defeat ISIS without more boots on the ground, and Bruce Riedel, a veteran CIA officer and terrorism expert, says that traditional counterterrorism intelligence analysis doesn’t work, as ISIS infuses itself into civil wars in places like Syria, Iraq, Libya, Yemen, and the Sinai.[12] And while I am inclined to agree that boots on the ground are needed, they should be Arab boots on the ground. ISIS in Iraq is nearly finished, this is true (only Mosul and several other small towns remain). But after Iraq is free of ISIS control, there is still Syria to be taken care of. We might not like Assad, to be sure, but the majority of Syria’s citizens support him; in fact, Assad has more support within his country than Obama or Congress have in America.[13] Any government installed after American-backed regime change will be viewed as a puppet government by the locals, and will thus lack the legitimacy needed to stabilize the region. If one needs evidence of this, look no further than Afghanistan and Iraq.

#4: Provide assistance (preferably private assistance; the government does not manage its money very well) to rebuild housing, infrastructure, and businesses destroyed by the conflict. This assistance should come in the form of temporary refugee camps, food, medical supplies, etc.

#5: Return the refugees to these regions once they are stabilized. It’s in no one’s interest to flood Europe with refugees, causing economic problems for the European Union (which is already in a slump as it is)[14] and, as a consequence, strengthening xenophobic movements. These people don’t need to be sent into the ghettos of Europe; they need their homes back.