Myths and Facts About The Income Tax

President Donald Trump’s tax cuts are going to take effect in a couple of days, and I wanted to dispel some common myths about taxes, both in general and in regards to this specific tax reform plan. This first post will be about the federal income tax. The next post will be about the corporate tax.

General myths
Myth: The rich don’t pay their “fair share” in taxes.

Fact: The definitions of “the rich” and “fair share” are almost never given; these terms are just thrown around with no concrete numbers attached to what a “fair share” would be.

Now, it is true that the rich will receive the most benefit from any tax cut, but that’s because they pay the vast majority of taxes to begin with. One would think everyone would understand this simple mathematical point, but when, according to IRS data [remember, this looks at the amount PAID, so it accounts for any tax loopholes, deductions, or evasion], the top 50% of income earners pay more than 97% of all income taxes, and the top 10% of income earners pay 71% of all income taxes[1][2], it is literally impossible to cut taxes without the rich getting the majority of the benefit. And while they do earn much more income to start with, they still pay a far higher share of taxes than their share of income.

Furthermore, using data from the mid-2000s, the richest 10% in the United States pay far more of our country’s taxes [note that this is ALL taxes, not just income taxes] than their counterparts elsewhere in the developed world.[3] Using CBO data from 2013, the top 20% of Americans pay 39.1% of their income [23.6% for the top 1% and 15.5% for the highest quintile], whereas the bottom 40% pay NEGATIVE 8.4% of their income [-7.2% for the lowest quintile and -1.2% for the second-lowest quintile].[4]

For as politically scandalous as presidential candidate Mitt Romney’s claim was that 47% of Americans won’t vote for him because they don’t pay any income taxes, his observation that 47% of Americans don’t pay income taxes is correct. Indeed, going by 2015 tax law, MarketWatch uses data from the Tax Policy Center[5] showing that about 45.5% of Americans pay no income tax at all, and further confirms that the richest Americans, the richest 20% in this calculation, pay 86.8% of the federal income tax, whereas the bottom 40% pay a NEGATIVE 3.9%.[6]


Myth: Income taxes on the rich were very high during the 1950s and 1960s and still brought economic prosperity.

Fact: It is true that the government had marginal income tax rates as high as 91-92% during the 1950s and 1960s, but those rates applied to almost no one’s income. First, it is important to illustrate the relevance of tax BRACKETS as opposed to tax RATES. In the United States, income taxes work by breaking your income into separate tiers and taxing different parts of it at different rates.

As explained by The Simple Dollar, “Let’s say you’re a single taxpayer who earns $35,000 per year. The first $9,275 of your income is taxed at 10%, and the remaining $25,725 is taxed at 15%. While $35,000 falls into the 15% tax bracket, your effective tax rate is actually 13.7%. The higher your income, the more tax brackets you pass through to arrive at your effective tax rate.”[7]
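
To make that arithmetic concrete, here is a minimal Python sketch of the bracket logic (the thresholds and rates are the ones from the quote above; everything else is illustrative, not real tax software):

```python
def effective_tax_rate(income, brackets):
    """Total tax and effective rate from marginal brackets.

    `brackets` is a list of (threshold, rate) pairs, sorted by threshold;
    each rate applies to the slice of income between its threshold and
    the next one.
    """
    tax = 0.0
    for i, (threshold, rate) in enumerate(brackets):
        upper = brackets[i + 1][0] if i + 1 < len(brackets) else float("inf")
        if income > threshold:
            tax += (min(income, upper) - threshold) * rate
    return tax, tax / income

# The Simple Dollar's example: 10% up to $9,275, then 15% above that.
tax, rate = effective_tax_rate(35_000, [(0, 0.10), (9_275, 0.15)])
print(f"tax = ${tax:,.2f}, effective rate = {rate:.1%}")
# tax = $4,786.25, effective rate = 13.7%
```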

Bracket thresholds can be lowered without changing rates, and more of people’s income is effectively taxed at higher rates in the process. Conversely, both a bracket’s threshold and its rate can be raised, yet it collects less, because the threshold sits so far above the majority of people’s income that the rate rarely applies. This is exactly what happened in the 1950s and 1960s.

As economists Daniel Baneman and Jim Nunns explain, in 1958 “there were 24 brackets [compared to 6 today] and the top rate was 91% [compared to roughly 35 percent today],” but “only a SMALL FRACTION of returns were subject to [the higher] rates.” The truth is, almost no one paid these high tax rates of 91-92%. As Baneman and Nunns further explain, “in 1963, for example, only 501 returns [of 64 million filed] reported taxable income in the 91% bracket.”[8] That’s only 0.00078% of all federal income tax returns, and it accounted for only 0.1% of total federal income tax revenue.

Furthermore, we couldn’t even replicate this system because the economy of the 1950s and 1960s was so different from our own. As Arpit Gupta explains, “the collapse of the global economy after World War II and the nature of postwar industrial capitalism, created a period of high corporate earnings in the United States. American firms did not vie then, as they do now, with competitors on every inhabited continent. […] Half a century later, the nature of global capitalism has drastically changed [such as how] the holders of capital have diversified [and] now include pension funds and ordinary investors.”[9]


Trump Tax Plan myths
Myth: The tax plan is a tax cut for the very rich and is “theft” from the middle class.

Fact: This is connected with the fact that the rich pay MOST of federal income taxes anyway, and as I stated earlier, it is logical that they are the ones who benefit the most from the tax plan. Despite this, the middle class and the lower class are still going to get a tax cut. Per the actual numbers released by the Joint Committee on Taxation, the lower and middle classes [comprising those earning less than $100,000 a year] are set to keep $62,797,000,000 of their own money in 2019 alone.[10]

It has been said that the middle class[11] will receive only roughly 23% of the cuts to the income tax[12]; what goes unmentioned is that the middle class accounts for, as of 2017, only 3.9% of total individual income tax payments.[13] Also note that in this data, further supporting the point I made regarding the first myth, the bottom 50% of Americans pay a NEGATIVE 7.6%, and the top 10% of income earners pay 80.9% of income taxes. The remaining people, in the 50th to 80th percentile, pay the remaining 26.9%.[14] In other words, the group that contributes only 3.9% of federal income tax is getting 23% of the benefit of these cuts, roughly six times its share, meaning it benefits disproportionately relative to how much it pays into the income tax.


Myth: Many people’s taxes will increase as a result of the plan.

Fact: The Tax Policy Center analyzed President Trump’s tax plan and found that 80% of all Americans, including 90% of the middle class, will see lower taxes from it, and that just 5% of Americans will see their tax bill grow, the largest portion of whom make over $1 million per year.[15]

[11]: Note: The definition of the “middle class” is ambiguous. The Joint Committee on Taxation looked at households with income between $20,000 and $100,000. The definition used in this analysis is households between the 30th and 70th income percentiles, which pay 3.9% of total income taxes. A broader definition, consisting of families between the 20th and 80th income percentiles, contributes 9.2% of all federal income taxes. The middle 50% of American wage-earning households contributes approximately 6.5% of income tax in America. No matter which of these definitions you choose, they are all receiving disproportionately more benefit from this income tax cut than they contribute.
[14]: Ibid.


The Obligation of Establishing a Caliphate in Islam

With the Islamic State’s announcement of a caliphate three years ago, and the contemporary discussion of the institution in recent years, I wanted to shed some light on what the religion of Islam says about the institution of the Caliphate.

In Islamic history, there was/is an institution known as the Caliphate, headed by a caliph, a man who led the entire Islamic community and governed the affairs of the Muslims after the death of Muhammad. Historically, this was a title reserved only for men, the only exception being Sitt Al-Mulk, who ruled the Fatimid Caliphate between 1021 and 1023, and even then, her nephews took the formal title of caliph.[1] The position of caliph was reserved for men based on a hadith from Muhammad: when he heard news of Persia making the daughter of Khosrau their queen, Muhammad said, “Never will succeed such a nation as makes a woman their ruler.”[2] Indeed, it is argued that women are not deemed appropriate for the post of a judge or governor because neither the first four rightly guided caliphs nor Muhammad appointed any woman to the position[3], although there is some differing interpretation on this, notably from the Muslim scholar at-Tabari, who said that “it is permissible for a woman to be a judge regarding every issue because since it is permissible for a woman to become a mufti, it should be permissible for her to become a judge, too.”[4] I am getting ahead of myself on this topic, but for a general overview, see the source in footnote #5.[5]

Nevertheless, establishing a caliph under the concept of imamah (leadership) is mandated by Islamic law, as will be evidenced below.

First, in the Quran, the Arabic word for caliph/Caliphate is used in Chapter 2, Verse 30: “And [mention, O Muhammad], when your Lord said to the angels, ‘Indeed, I will make upon the earth a successive authority [caliph].’ They said, ‘Will You place upon it one who causes corruption therein and sheds blood, while we declare Your praise and sanctify You?’ Allah said, ‘Indeed, I know that which you do not know.’”[6] And again in Chapter 38, Verse 26: “[We said], ‘O David, indeed We have made you a successor upon the earth, so judge between the people in truth and do not follow [your own] desire, as it will lead you astray from the way of Allah.’ Indeed, those who go astray from the way of Allah will have a severe punishment for having forgotten the Day of Account.”[7] On the first verse, Al-Qurtubi said in his commentary on the Quran that “this [verse] is a basis for appointing a [leader] and [caliph] who is listened to and is obeyed, for the word is united through him and the [laws] of the [Caliph] are implemented through him, and there is no difference regarding the obligation of that among the Ummah, nor among the Imams, except Abu Bakr al-Asamm, who was most deaf regarding the Shari’ah.”[8]

And in the hadith, Muhammad affirmed that the caliphate is one “of prophecy [that] will last thirty years”[9], covering the first four caliphs, and that, after the period of the Caliphate, a period of “harsh rule”, and a period of “coercive rule”, “there will be the [Caliphate] upon the way of the Prophethood.”[10] Further, the Caliphate “will remain among the Quraish [the tribe Muhammad came from] even if only two persons are left.”[11] It is also mandated in Islam that there must be a bay’ah (pledge of allegiance) to a ruler (caliph), for as Muhammad said, “Whoever died without an Imam he dies a death of jahiliyyah [ignorance].”[12] He also said, “when you are holding to one single man as your leader, you should kill who seeks to undermine your solidarity or disrupt your unity”[13] and “when an oath of allegiance has been taken for two caliphs, kill the one for whom the oath was taken later.”[14]

The scholars of Islam were also in consensus on the obligation of the Caliphate. Al-Juzayri sums up this consensus when he noted that “the Imams [of the four madhabs] all agreed that the [Caliphate] is an obligation, and that the Muslims must appoint a [Caliph] who implements the rites of the [religion] and gives the oppressed justice against the oppressors.”[15] Al-Ghazali said the Caliphate was “from the necessities of the shari’a that simply cannot be left”[16], while Al-Haskafi called it “the most important of obligations.”[17] Similar statements were made by Al-Qurtubi[18], Al-Amidi[19], and Ibn Taymiyyah.[20]

Ibn al-Mubarak said, “Indeed the jama’ah is the rope of Allah, so hold on to its grip, firm for him who professes Islam […] If not for the Caliphate, paths would not be safe for us and the weak would be a source of pillage for the strong.”[21] Qadi Abdul Jabbar said, “the establishment of the Caliphate is compulsory upon the Ummah, because many of the obligations of Islam cannot be fulfilled without the Caliphate.”[22] Ibn Hazm stated that “all of Ahl as-Sunnah [the Sunnis], all of Shia, all of Khawarij [except some from the Khawarij] have agreed on the obligation of Caliphate.”[23]

Furthermore, the Caliph is to rule by the law of Allah as set forth in the Quran and Sunnah. Izz al-Deen ibn Abdus Salaam said that “only Allah is to be obeyed exclusively. By the same token, no one has the right to rule or judge except Him, for His rulings are derived from the Quran and Sunnah.”[24] Likewise, Shihab al-Deen al-Alusi said, “There is no dispute in the kufr of anyone who doesn’t have the certainty to judge by what Allah sent down, and moreover, the general design for the denial of faith for anyone not judging by what Allah sent down is one of one class.”[25] Ibn Kathir said, “[There are those who rule by] the law of the land to which they give precedence, above the Quran and the Sunnah. Whoever amongst them does this is a kaffir who must be opposed until he returns to the rule of Allah and his Messenger. Such a person should not even rule for a day.”[26]

The consensus of Muslim scholars on the establishment of the Caliphate is unanimous.[27][28][29][30][31][32][33][34][35][36][37][38][39][40][41][42][43][44][45][46][47][48][49][50][51][52][53][54][55][56][57][58][59][60][61][62][63][64][65][66]

Anyone who denies that the institution of the Islamic caliphate is mandated in the religion of Islam is grossly ignorant of 1,300 years of Islamic history and law on the matter.

[1]: Fatima Mernissi and Mary Jo Lakeland, The Forgotten Queens of Islam. Oxford University Press. (2003)
[4]: At-Tabari, Az-Zuhayli
[8]: Al-Qurtubi, Tafsir Al-Qurtubi
[10]: Musnad Ahmad
[12]: Musnad Ahmad
[15]: Al-Juzayri, Al-Fiqh ‘ala al-Madhahib al-Arba’a
[16]: Al-Ghazali, Al-Iqtisad fi al-I’tiqad
[17]: Al-Haskafi, Radd al-Muhtar
[18]: Al-Qurtubi, Al-Jami’ li Ahkam al-Qur’an
[19]: Al-Amidi, Ghayat al-Muram
[20]: Ibn Taymiyyah, Al-Siyasah al-Shar’iyyah
[21]: Ibn al-Mubarak. Hilyat al-’Awliya
[22]: Qadi Abdul Jabbar, Sharh al Usul al Khamsah
[23]: Ibn Hazm, Al-Fasl fi Al-milal Wa-al-ahwa’ Wa-al-Nihal
[24]: Izz al Deen Ibn Abdus Salaam, Qawa’id al-Ahkam
[25]: Shihab al Deen al-Alusi, Tafsir Rooh al-Ma’ani
[26]: Ibn Kathir, Tafsir Ibn Kathir
[27]: ‘Adud al-Din al-Iji, Sharh al- Aqa’id al-Nasafiyyah
[28]: Abu Bakr Al-Ansari, Durr al-Mukhtar
[29]: Abu Hafs Umar al-Nasafi, Al-Ahkam As-Sultaniyyah
[30]: Abu Ya’la al-Fara, Al-Jami’ li Ahkam al-Qur’an
[31]: Al Ahkam, As-Sultaniyyah
[32]: Al Qazi Azduddin Iji, Al-Sawa’iq al-Muhriqah
[33]: Al Shawkani, Al-Fiqh ‘ala al-Mathahib al-Arba’a
[34]: Ala’-Din al-Haskafi, Sawaa’iq al-Muhraqah
[35]: Al-Eiji, Al-Sawa’iq al-Muhriqah
[36]: Al-Ghazali, Hujjat Allahi al-Baligha
[37]: Al-Haramayn al-Juwayni, Sharh Sahih Muslim
[38]: Al-Haythami, Al-Muqadima
[39]: Al-Mawardi, Al-Mawaqif fi ‘Ilm al-Kalam
[40]: Al-Nasafi, Al-Mawaqif
[41]: Al-Nawawi, Al-Hafidh Zayn al-Deen Ibn Rajab al-Hanbali,Al-Iqtisad fi al-I’tiqad
[42]: Al-Qurtubi, Ghayatul Bayan
[43]: Al-Shahrastani, Al-Fasl fi Milal wa ‘l-Ahwaa’ wa ‘l-Nihal
[44]: Al-Shawkani, Al-Aqaid
[45]: Al-Taftazani, Al-Muqaddimah
[46]: At-Taher Ibn Ashour, Ghiyath al-Umam fi Tiyath al-Dhulam
[47]: Ibn Hajar al Asqalani, Al-Ahkam al-Sultaniyyah
[48]: Ibn Hajar al-Haytami, Sharh al-Aqa’id al-Nasafiyyah
[49]: Ibn Hajar al-Haythami, Sharh Sahih Muslim
[50]: Ibn Hazm, Al-Ahkam as-Sultaniyyah
[51]: Ibn Khaldun, Al-Sawaa’iq al-Muhriqah
[52]: Ibn Khaldun, Ghayat Al-Wusul Fi Sharh Lub Al-Usul
[53]: Ibn Taymiyyah, Kashshaf al-Qinaa’ ‘an Matn al-Iqnaa’
[54]: Imam al-Mawardi, Fath al-Bari
[55]: Imam al-Nasafi, Aqa’id al-Nasafiyya
[56]: Imam Mawardi, Al Muqaddimah
[57]: Jamaluddin al-Ghaznawi, Al-Aqa’id al-Nasafiyyah
[58]: Mansur al-Buhuti al-Hanbali, Nailul Authar
[59]: Mansur ibn Yunus al-Buhuti, Al-Mawaqif Fi ‘Ilm Al-kalam
[60]: Muhammad Amin Ibn Abidin, Kashshaf al-Qinaa
[61]: Sa’d al-Din al-Taftazani, As-Saw’iq Al-Muhtariqa
[62]: Sayf al-Din al-Amidi, Radd al-Muhtar
[63]: Shah Waliullah al-Dehlawi, Masaa’il of Muhammad bin Ishaq
[64]: Shah Waliullah al-Dehlawi, Usul An-Nitham Al-Ijtima’I Fil Islam
[65]: Shams al-Din al-Ramli, Al-Sayl al-Jarrar
[66]: Shamsuddin al-Ramli, Al-Siyasah al-Shar’iyyah and Majmu’ al Fatawah

The Islamic nature of the Islamic State

Is ISIS Islamic? No doubt, that question has been on the minds of many people since the establishment of the so-called caliphate, the first in nearly a century. I’ve seen this question posed many times, with the majority of commentators answering in the negative, mostly using simple gestures and broad statements like “they’re not Islamic because the Quran forbids X.” The only works that have punctured these broad statements and gone into a fair amount of detail, as far as I’ve seen, are very few in number (I am specifically referring to the “Open Letter to Baghdadi,” published in late 2014, and “Refuting ISIS” by Shaykh Muhammad al-Yaqoubi, published in mid-to-late 2015, both of which I have read). Even then, I still had a lingering feeling about this issue and thought: what does ISIS think about itself, how does it present its methodology, or its ‘aqidah (to use the Islamic term), and how does it view criticism from its enemies?

And thus, the journey started. I started compiling official materials from ISIS’s very own writings, and I have stuck solely to the ones that have been published in English. I may do some of their other foreign-language material (French, German, Urdu, etc.) in the future and update this study accordingly.

Many (but not all) of the sources that ISIS uses in its magazines appear either in parentheses or brackets, or in the occasional footnote that references a scholarly work, a verse of the Quran, or a hadith (to give a general example).

When it comes to Dabiq and Rumiyah, the following are each separated into their own category for the sake of brevity:

1. Magazine footnotes
2. The “Wisdom” sections of Dabiq
3. The infographics from Rumiyah
4. The quotes at the end of each magazine
5. News
6. Reports
7. Military and Covert Operations

The following materials have been included:

1. Magazines published by Al-Hayat Media Center [I will separate these by article/issue some time in the future]
-Dabiq (Issues 1-15)
-Rumiyah (Issues 1-12)
-The Murtadd Vote (stand-alone article)

2. Speeches by the “caliph” of ISIS, Abu Bakr al-Baghdadi
-Khutbah and Jum’ah Prayer in the Grand Mosque of Mosul
-A Message to the Mujahidin and the Muslim Ummah in the Month of Ramadan
-Even If the Disbelievers Despise Such
-March Forth, Whether Light or Heavy
-So Wait, Indeed We, Along with You, Are Waiting
-This Is What Allah and His Messenger Promised Us

3. Speeches by the first spokesperson of ISIS, Abu Mohammad al-Adnani
-This Is the Promise of Allah
-Indeed Your Lord Is Ever Watchful
-Say Die in Your Rage
-So They Kill and Are Killed
-O Our People Respond to the Caller of Allah
-Say to Those Who Disbelieve You Will Be Overcome
-That They Live by Proof

4. Speeches by the second spokesperson of ISIS, Abul-Hasan Al-Muhajir
-You Will Remember What I Have Told You
-Be Patient, for Indeed the Promise of Allah Is True
-And When the Believers Saw the Confederates

When all of the previously mentioned sources are tallied up, there are a total of nearly 4,200 references (including those found in the footnotes made by the editors of the magazines). From these references, the ideology of ISIS takes shape.

Muslim sources: 3,574 (85.76%)
-The Quran: 1,624 (38.97%)
-Hadith: 982 (23.56%)
-Muslim scholars: 884 (21.21%)
-Muslim historians: 50 (1.19%)
-Sirah: 26 (0.62%)
-Islamic Caliphs: 8 (0.19%)

Christian sources: 84 (2.01%)
-The Bible: 72 (1.72%)
-Christians: 7 (0.16%)
-The Pope: 5 (0.11%)

Group sources: 326 (7.82%)
-The Islamic State and its predecessors (statements, magazines, spokespersons, etc.): 203 (4.87%)
-Al-Qaeda (groups, leaders, members): 45 (1.07%)
-The Muslim Brotherhood (leaders, documents, etc.): 42 (1%)
-Syrian Rebel Groups: 21 (0.5%)
-The Taliban: 15 (0.35%)

Other sources: 183 (4.39%)
-News sites: 99 (2.37%)
-Shia scholars: 42 (1%)
-Government officials: 27 (0.64%)
-Unknown: 8 (0.19%)
-Poets: 3 (0.07%)
-Military: 2 (0.04%)
-Dictionaries: 1 (0.02%)
-Non-Muslim historians: 1 (0.02%)

Total: 4,167
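
As a sanity check, here is a minimal Python sketch recomputing the headline shares from the raw tallies (tiny differences from the truncated percentages above are just rounding):

```python
# Category totals from the count above; each share is computed
# against the grand total of 4,167 references.
categories = {
    "Muslim sources": 3574,
    "Christian sources": 84,
    "Group sources": 326,
    "Other sources": 183,
}
total = sum(categories.values())
print(total)  # 4167

for name, n in categories.items():
    print(f"{name}: {n} ({n / total:.2%})")
# Muslim sources: 3574 (85.77%), Christian sources: 84 (2.02%), ...

# The same computation works for any subcategory, e.g. the Quran:
print(f"Quran: {1624 / total:.2%}")  # 38.97%
```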

No doubt, as shown above, ISIS can rightfully be called the Islamic State (which is what I will call it henceforth). What some may claim is that the Islamic State takes its ideology, when it comes to the variety of Muslim scholars, from two primary sources: Ibn Taymiyyah and Muhammad Ibn ‘Abdil-Wahhab. While this is true to some extent, there are many other scholars the Islamic State draws its ideology from, including the founders of the four Sunni schools of Islamic law.

But what does this mean for Islam? Surely, very few Muslims approve of the Islam taught and preached by the Islamic State[1], but this alone does not solve the issue. As Graeme Wood put it, “Muslims can reject the Islamic State; nearly all do. But pretending that it isn’t actually a religious, millenarian group, with theology that must be understood to be combated, has already led the United States to underestimate it and back foolish schemes to counter it. We’ll need to get acquainted with the Islamic State’s intellectual genealogy if we are to react in a way that will not strengthen it, but instead help it self-immolate in its own excessive zeal.”[2]

A Muslim friend of mine once agreed with me that Islam is in need of reform, much as Christianity was during the era of Martin Luther. Yet as it is reported in the hadith (the sayings and deeds of Muhammad), Muhammad said, “Allah will raise for this community at the end of every hundred years the one who will renovate its religion for it.”[3] No doubt, the end of this century in the Islamic calendar (1499 A.H.) isn’t until 2076 A.D. But I fear that the Islamic State is that reform for the end of the 15th century Hijri.


The Seen and the Unseen: Lessons from the ADA

There are some pieces of legislation whose possible consequences have escaped scrutiny. French author and economist Frédéric Bastiat noted that “between a good and a bad economist this constitutes the whole difference, the one takes account of the visible effect; the other takes account both of the effects which are seen, and also of those which it is necessary to foresee.”[1]

With this in mind, the Americans with Disabilities Act (ADA) of 1990 also has some seen and unseen effects. The seen effect is primarily the intent of the law: the prohibition of wage and employment discrimination against the disabled.[2]

But then there are the unseen effects, shown by two primary issues: employment and litigation.

Current Population Survey data shows that in the 1980s, the employment rates for men aged 18 to 64 with and without disabilities changed little, save for a small dip due to the recession early in the decade. During the 1990s, however, we see a shift. The employment rate for men with disabilities (in 1991-1992, 48.9 million people, about 19% of the population, had a disability, while the 2010 census showed that 56.7 million people, again about 19% of the population, had a disability[3]) fell by 22% from 1989 to 2000, while that of men without disabilities dropped only 1%. Little changed for women with and without disabilities, the former dropping 1% over the 1990s.[4] Although the CPS relies on a work-limitation-based measure of disability, the evidence from the CPS is corroborated by evidence from several other surveys.[5]

But what caused the major dip in the 1990s that wasn’t present in the 1980s? Much of the literature points to the ADA as the culprit, although there are several technical problems with this, specifically the definition of disability and an individual’s likelihood of reporting a disability when applying for a job. To summarize, it is a social stigma issue: if labeling oneself as disabled became less stigmatized, more job applicants would report a disability, perhaps as a last resort when other means of finding employment have failed, which would artificially shift the reported employment rates for people with disabilities. Certainly, there needs to be more research on this topic.[6]

In any case, the most common reasoning for the ADA being the culprit rests on the fact that requiring employers to provide “reasonable accommodation” to people with disabilities, combined with the prohibition on passing the costs of accommodations on to employees through measures like salary reductions, creates a disincentive to hire people with disabilities who might need accommodations.

Another explanation for the decline in employment, however, could be the 1990-1991 recession, which pushed a large number of people with disabilities out of work and onto SSDI [Social Security Disability Insurance] rolls.[7] It is difficult to deny that the expansion of SSDI benefits contributed to the 1990s employment drop, but how much it contributed is hard to pin down, although both have theoretically played a role. Several other explanations have been offered by Tolin and Patwell, including that more people with disabilities went to college during the 1990s, prioritizing academic achievement over work, and that part-time employment increased as a counterbalance to the “reasonable accommodations” piece of the ADA.[8]

Nevertheless, employment has continued to fall in the 2000s and 2010s. Even in the shorter term, the share of Americans with a disability who were employed [age 16 and over] slipped from 19.2% in 2009 to 17.5% in 2015.[9] Granted, this was in part due to the slow recovery from the 2008 recession, but even then, the overall drop in labor force participation of disabled individuals between 1988 (50% participation rate) and 2014 (about 20%) is concerning.[10]

There has, however, been some argument that the ADA has had positive effects on the employment of disabled people. According to an MIT study, the ADA has increased job security for the disabled who are already employed, but has reduced employment prospects for those seeking jobs.[11] Furthermore, while Acemoglu and Angrist do agree that “the ADA’s reasonable accommodation provision creates an incentive to employ fewer disabled workers, the introduction of hiring and firing costs complicates the analysis. If the threat of ADA-related litigation encourages employers to increase the hiring of the disabled and if the number of employers is not very responsive to profits or costs, the ADA may increase the employment of disabled workers as ADA proponents had hoped.”[12]

An average cost of $930 per disability accommodation since October 1992 has been measured, but this “is likely to be an underestimate since it includes only voluntary accommodations, and there is no allowance for costs due to time spent dealing with ADA regulations and possible reduced efficiency due to a forced restructuring.”[13]

The ADA has arguably become a litigation nightmare, for as Denise Johnson explains, “under Title III of the ADA, a plaintiff [the one who brings the case forward] doesn’t get damages, but is entitled to attorneys’ fees and costs and injunctive relief.” Many ADA lawsuits are settled simply because of the cost of litigation.[14] As early as 1995, for example, one federal judge said that a particular ADA case was “a blatant attempt to extort additional money.”[15]

Much of the hotbed for these ADA lawsuits is in California, Hawaii, Illinois, and Florida. They can have a severe impact on a local economy, and many small-business owners choose to settle with the plaintiff rather than fight it out in court and incur thousands in legal fees, along with the possibility of shutting down.[16]

Christopher Bell, a managing partner of the Minneapolis office of Jackson, Lewis, Schnitzler & Krupman, expressed concern about the litigation costs to employers resulting from the ADA, noting that while employers win over 92% of the ADA cases, it can cost an employer more than $150,000 to do so. Ann Reesman, the general counsel of the Equal Employment Advisory Council, concurred and said that courts dismiss many ADA employment cases because they do not meet the threshold requirements that the ADA sets. She also notes that it could cost an employer $50,000 to $100,000 in attorneys’ fees to have the court dismiss a claim.[17]

Yet in “over 90% of the nearly 20,000 ADA discrimination cases filed each year at the Equal Employment Opportunity Commission, the plaintiffs do not win”[18], though the definition of “winning” is slightly obscured, as shown by Ruth Colker’s article, which is based in part on incomplete information thanks to inconsistent court publishing patterns; still, from the information she pulled, ADA cases in the courts of appeals overwhelmingly favor the defendant, 94% to 6%.[19]

Throughout all of this, I do not deny that people with disabilities face a variety of barriers in many areas of public life (the literature on this, as far as I have seen, is scant), but oftentimes, unfortunately, it is the disability rather than discrimination that creates these obstacles. I am not saying that it is the fault of the disabled for their condition, nor am I trying to downplay the victories that may have come because of the ADA.

I personally have a friend who has cerebral palsy (a disability that impairs motor movement), and without the ADA, it would be harder for her to find a job (despite her high GPA and her level of self-motivation). I have no doubt that she is smart enough to land a good job in the future, yet there would likely be an able-bodied person of equal skill who would fill that position. The same could go for the other nearly 1,000 disabled students at her school, as well as the hundreds of thousands across the country.

But in any case, I realize that the law of unintended consequences is still in effect. On the 20th anniversary of the ADA, for example, Andy Imparato, the president of the American Association of People with Disabilities, said that “there remain large obstacles when it comes to finding a job [because] despite the ADA, 70% of people with significant disabilities are not working today, the same as twenty years ago.”[20]

A more balanced approach on the ADA’s effect comes from Samuel Bagenstos, who wrote in 2004 that “on the now-standard economic analysis, the ADA will have a negative employment effect only if expected accommodations costs exceed the liability costs employers can expect to face if they discriminate against individuals with disabilities in hiring. […] [Yet] there are serious limitations to what the ADA can do in that respect. For very large numbers of people with disabilities, the principal barriers to employment are structural [and] these barriers operate well before an employer even has an opportunity to discriminate. An antidiscrimination law like the ADA simply cannot eliminate them. […] The evidence that the ADA has caused a decline in employment for people with disabilities cannot be ignored, but it is a long way from showing that the statute’s ultimate effects will be perverse. […] Any meaningful effort to achieve empowerment and integration must both build on the ADA’s successes where the statute has been successful and find alternative policy tools where the statute has not been successful.”[21]

[5]: Ibid.
[6]: See the chapter by Blanck, Schwochau, and Song, as well as the chapter by Kruse and Schur, in The Decline in Employment of People with Disabilities: A Policy Puzzle, edited by Stapleton and Burkhauser, for more information on this subject.

Some Final Thoughts on Terrorism

Over a considerable number of past posts, I’ve focused on a topic that has interested me for quite a while: terrorism. It was a big part of the National Security class I took last year, and I thought I would follow up with some interesting and hopefully insightful research. But now, after some consideration, I want to focus on other topics; constitutional law, economics, and foreign policy, to name a few prominent ideas I have in my head.

But first, I wanted to share some final thoughts on terrorism: primarily, what it is and how we might solve it (if we can). The War on Terror has been the era in which I grew up and am still growing up. I’ve lived through 9/11, Afghanistan, and Iraq, and now, over the past few years, a wave of terrorism in the West conducted by the Islamic State of Iraq and Syria (ISIS).

What I find interesting is how violent attacks are categorized: after some major attack, the police often tell the public that they “will not rule out terrorism” as a motive. But what separates terrorism from other acts of violence? One could say that a terrorist attack is one that intends to create fear to achieve a political, religious, or other ideological aim, but in the field of terrorism studies, the definition is much vaguer than that.

In a historical context, the term “terrorism” traces its linguistic roots to the French word “terrorisme,” which comes from the Latin “terror” (great fear).[1] As Myra Williamson notes, “During the reign of terror, a regime or system of terrorism was used as an instrument of governance, wielded by a recently established revolutionary state against the enemies of the people. Now the term ‘terrorism’ is commonly used to describe terrorist acts committed by non-state or subnational entities against a state.”[2]

Simon reported in 1994 that there are at least 212 different definitions of terrorism across the world, with 90 of them being used by governments and other institutions.[3] Several years previously, Schmid and Jongman compiled over 100 academic and official definitions of terrorism to identify shared components, and discovered the following:

(1) Concept of violence: 83.5% of definitions
(2) Political goals: 65%
(3) Causing fear and terror: 51%
(4) Arbitrariness and indiscriminate targeting: 21%
(5) Victimization of civilians/noncombatants/neutrals/outsiders: 17.5%.[4]

What this shows is that some elements one would think should be shared by most definitions (like the bottom two on the list above) are only minority concepts in the wide array of definitions. It is because of these conceptual problems that a universally accepted definition does not exist.

To give one instance of conflicting definitions: in Israel/Palestine, a public opinion poll conducted in December 2001 surveyed Palestinian reactions to two events widely called terrorist acts. 98.1% of the Palestinians surveyed agreed or strongly agreed that “the killing of 29 Palestinians in Hebron by Baruch Goldstein at al Ibrahimi mosque in 1994” should be called terrorism, while 82.3% of the same respondents disagreed or strongly disagreed that “the killing of 21 Israeli youths by a Palestinian who exploded himself at the Tel Aviv Dolphinarium” should be called terrorism.[5]

In the post-9/11 world, the challenge of producing a coherent definition arguably worsened and persists to this day. Alex Schmid’s two editions of his book (published in 1984 and 1988) attempting to find a coherent definition of terrorism fell short of success, as he was still searching for a broadly accepted and reasonably comprehensive explication.[6] In 2011, Schmid ended up updating his definition using 12 distinct points, chief among them being that “terrorism refers, on the one hand, to a doctrine about the presumed effectiveness of a special form or tactic of fear-generating, coercive political violence and, on the other hand, to a conspiratorial practice of calculated, demonstrative, direct violent action without legal or moral restraints, targeting mainly civilians and non-combatants, performed for its propagandistic and psychological effects on various audiences and conflict parties.”[7]

Attempts to define “terrorism” on an international scale have made little progress. In briefing the Australian parliament, Angus Martyn explained that in the 1970s and 1980s, the U.N. attempted to develop an accepted comprehensive definition of terrorism, but it failed primarily due to differences of opinion about “the use of violence in the context of conflicts over national liberation and self-determination.”[8] As a result of the above-mentioned factors, international law professor Ben Saul argued for an all-encompassing definition of “terrorism,” writing that “If the law is to admit the term [terrorism], advance definition is essential on grounds of fairness, and it is not sufficient to leave definition to the unilateral interpretations of States. Legal definition could plausibly retrieve terrorism from the ideological quagmire, by severing an agreed legal meaning from the remainder of the elastic, political concept.”[9]

But what can we, as a community of people, governments, etc., do to stop terrorism? History tells us that terrorist groups have ended in several ways. Jones and Libicki studied all the active terrorist groups they could find between 1968 and 2006, 648 in total; as of 2006, 136 had splintered and 244 were still active. Of the 268 that ended, 43% converted to nonviolent political action (the most famous case being the IRA after the Good Friday Agreement), 40% were taken out by policing and intelligence services, 10% ended in victory for the group, and the remaining 7% were ended by military force.[10]
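
A quick sketch of the arithmetic behind those figures, using only the numbers reported above (the rounded per-category group counts are my own back-of-the-envelope estimates from the stated percentages):

```python
# Jones and Libicki tracked 648 groups between 1968 and 2006.
total_groups = 648
splintered, still_active = 136, 244
ended = total_groups - splintered - still_active
print(ended)  # 268

# How the 268 groups that ended met their end, per the shares above.
endings = {"politics": 0.43, "policing": 0.40, "victory": 0.10, "military force": 0.07}
for how, share in endings.items():
    print(f"{how}: ~{round(share * ended)} groups ({share:.0%})")
# politics: ~115, policing: ~107, victory: ~27, military force: ~19
```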

Researcher Audrey Cronin lists three further primary ways that terrorist groups end, which are as follows:

(1) Capture or killing of a group’s leader (Decapitation)
(2) Group implosion or loss of public support (Failure)
(3) Transition from terrorism into other forms of violence (Reorientation)[11]

What is interesting and important to note from the Jones and Libicki study is that, when it comes to ending a group via military force, it is most effective when the group is an insurgency: large, well armed, very lethal, and well organized.

Their quantitative analysis yielded several interesting findings, including the following:

(1) Religious terrorist groups take longer to eliminate than other groups. Approximately 62% of all terrorist groups have ended since 1968, but only 32% of religious terrorist groups have ended.
(2) Religious groups rarely achieve their objectives. No religious group that has ended achieved victory since 1968.
(3) Size is a significant determinant of a group’s fate. Big groups of more than 10,000 members have been victorious more than 25 percent of the time, while victory is rare when groups are smaller than 1,000 members.[12]

In today’s age, as the Global Terrorism Database shows (numbers are for 2015; data for 2016 has not been produced yet), 7 of the top 10 deadliest terrorist groups are motivated by radical Islam (ISIS, Boko Haram, the Taliban, al-Shabaab, Houthi extremists, al-Nusrah Front, and the Sinai Province of ISIS). Indeed, many of these most lethal attacks, and attacks in general, are against other Muslims, but ISIS and these other radical Islamic groups brush them aside as (1) hypocrites and not true Muslims and (2) people who vote in the democratic process that ultimately authorizes what these groups call the “War on Islam.”[13]

But what is the best way to deal with these groups? In the case of ISIS (including its Sinai Province), Western journalist turned hostage John Cantlie, writing for the ISIS English-language magazine Dabiq, wrote that negotiation is possible and “a truce with Western nations is always an option in Shari’ah law.”[14] In the editor’s note of another article on the same subject, also written by Cantlie, which stressed negotiations and what he argues is the inevitability of accepting ISIS as a legitimate state, ISIS noted that “a halt of war between the Muslims and the kuffar [non-Muslims] can never be permanent, as war against the kuffar is the default obligation upon the Muslims only to be temporarily halted by truce for a greater shar’i interest.”[15] But ISIS also gave another option, saying that if one does not submit to the authority of Islam by becoming Muslim, one can submit “by paying jizyah, for those afforded this option, and living in humiliation under the rule of the Muslims.”[16]

In essence, ISIS cannot be negotiated with unless it is on their terms. This also, in part, includes Boko Haram, known as ISIS’s West Africa Province (Wilayat Gharb Afriqiyyah), although in mid-to-late 2016 the group split after ISIS appointed Abu Musab al-Barnawi as the new leader of the wilayah and its former leader Abubakar Shekau refused to accept al-Barnawi’s appointment.[17] From what I could gather, the non-ISIS-affiliated faction led by Shekau has been at least willing to negotiate, as in the release of the Chibok schoolgirls kidnapped in 2014.[18]

Peace negotiations with the Taliban[19], al-Shabaab[20], and Houthi extremists[21] have made progress to varying degrees, but all have had difficulties along the way, due mostly to regional and local politics and the demands of some groups. It should be noted that al-Nusrah Front was dissolved earlier this year and renamed Hay’at Tahrir al-Sham, whose position is that negotiations with the Syrian government would be akin to, in their words, “suppress[ing] the revolution and crown[ing] the butcher [Assad].”[22]

In short, it is possible to negotiate with some of these groups, but the success or failure of such negotiations remains to be determined, and there should be other options on the table should they fail.

[2]: Myra Williamson, Terrorism, War and International Law: The Legality of the Use of Force Against Afghanistan in 2001 (Farnham, UK: Ashgate Publishing, 2013).
[3]: Jeffrey Simon, The Terrorist Trap (Bloomington, IN: Indiana University Press, 1994).
[4]: Alex Schmid and Albert Jongman, Political Terrorism: A New Guide to Actors, Authors, Concepts, Data Bases, Theories, and Literature (Amsterdam, NL: Transaction Books, 1988).
[6]: Bruce Hoffman, Inside Terrorism: Second Edition (New York City, NY: Columbia University Press, 2006).
[9]: Ben Saul, “Defining ‘Terrorism’ to Protect Human Rights,” Sydney Law School Legal Studies Research Paper, No. 08-125 (2008).
[11]: Audrey Cronin, How Terrorism Ends: Understanding the Decline and Demise of Terrorist Campaigns (Princeton, NJ: Princeton University Press, 2009).
[14]: Dabiq, Issue 12
[15]: Dabiq, Issue 8
[16]: Dabiq, Issue 15

Terrorism in the West and Immigration: A Case Study

There has been a lot said about terrorism and immigration, with some claiming that immigrants and refugees rarely, if ever, commit terrorism, at least in the case of radical Islamic terrorism. Curious about this claim, I decided to do some research into it over the past week to see if it had any merit. You can view my research in the link below.

I based this study on radical Islamic terrorist attacks that happened between May 24th, 2015 and May 24th, 2017 in Europe, Canada, and the United States, with particular attention paid to each perpetrator’s native status in relation to the country he or she carried out the attack in and, if applicable, whether the perpetrator was a first- or second-generation immigrant. I would like to note that there were some gaps in the information, labeled with “unknown” if a piece of information is not known, and with a question mark if there is some information suggesting a particular piece of information is correct but not stating it outright.

In this time frame, radical Islamic terrorists killed 434 people and injured 1,707, with the majority of deaths and injuries coming from just 3 attacks: Nice (July 2016), Orlando (June 2016), and Paris (November 2015).

Based on native status alone, perpetrators who were native [to the country where they carried out the attack] killed 132 (30.4%) and injured 347 (20.3%). On the flip side, non-natives killed a total of 124 (28.6%) and injured 619 (36.3%), while those with mixed identities (this category is for attacks with multiple perpetrators of different native statuses) killed 177 (40.8%) and injured 735 (43.1%), and those with an unknown native status killed 1 (0.2%) and injured 6 (0.3%).

On the surface, when comparing only native and non-native perpetrators, natives have a higher death count, and those with mixed statuses have both higher death and injury counts. There is another interesting factor when it comes to natives, however, and that is their immigration generation status, be they first-generation immigrants (the ones who immigrated to a country themselves) or second-generation (their parents immigrated).

Without regard to native status, first-generation immigrants killed 125 (28.8%) and injured 615 (36%), second-generation immigrants killed 115 (26.5%) and injured 265 (15.5%), and mixtures of the two killed 144 (33.2%) and injured 392 (23%). In total, first- and second-generation immigrants killed 384 (88.5%) and injured 1,272 (74.5%). Those who were native and not a first- or second-generation immigrant (i.e., purely native in regards to birth and ancestry) killed 11 (2.5%) and injured 74 (4.3%).
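
Here is a minimal sketch of how those combined totals fall out of the raw counts (the abbreviated labels are mine):

```python
# Casualties attributed to first- and second-generation immigrant
# perpetrators, per the counts in the paragraph above.
killed  = {"first-gen": 125, "second-gen": 115, "mixed": 144}
injured = {"first-gen": 615, "second-gen": 265, "mixed": 392}
total_killed, total_injured = 434, 1707  # all attacks in the two-year window

k, i = sum(killed.values()), sum(injured.values())
print(f"killed: {k} of {total_killed} ({k / total_killed:.1%})")      # 384 (88.5%)
print(f"injured: {i} of {total_injured} ({i / total_injured:.1%})")   # 1272 (74.5%)
```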

It is therefore a reasonable conclusion that radical Islamic terrorists who are non-native, or who are native AND a first- or second-generation immigrant, are the ones that cause the most casualties. But why is this? Part of it, I argue, has to do with integration, or the lack of it.

From this, it can also be conjectured that the reason natives do not kill as many is a sociological attachment to their country, although as far as I have seen, there is no literature on the subject. And although first-generation immigrants, as I showed in a previous post on this blog, commit less crime than natives, this does not seem to hold true for radical Islamic terrorism, as evidenced by the data above.

In regards to second-generation immigrants, their identity is relatively unsettled, as they are stuck between the identity (or identities) of their parents’ country and creating a new identity from the country they were born into. In general, there are “unique assimilation experiences and challenges faced by the children of immigrants.”[1]

In the French/European context, Olivier Roy noted that “there is no such thing as third- or fourth-generation jihadis [in France] […] This phenomenon of new jihadis in Europe is primarily a generational revolt. […] They [second-generation Muslim immigrants] join the second-generation immigrants’ ‘estranged’ Islam, which manifests in a generational, cultural and political rupture. In short, there is no point in offering them a moderate Islam. Radicalism attracts them by definition”[2], and they “are raised in an environment where they will constantly be torn between the heritage and host culture of what they learn in the home and at school.”[3]

Sandra Bucerius notes that “research findings in European countries indicate that some second-generation immigrant groups have crime rates that drastically exceed those of the native-born population.”[4]

Ronald Inglehart and Pippa Norris argue that if the multiculturalism thesis is correct, then “any significant cultural differences among majority and minority populations are not expected to diminish among second and third generation migrants; indeed if alienation from the West has occurred, as some observations suggest, then this could even potentially strengthen Muslim identities among younger populations.”[5]

Whether it is a lack of a settled identity resulting in rebellion or the strengthening of religious identities as a result of alienation, these Muslims are targeted by extremist groups like ISIS because of this vulnerability. It is also likely that, among both first- and second-generation Muslim immigrants, events in the wider Muslim world, particularly the West’s military operations there, play a role.

When they see their countries, or the countries of their parents, attacked, it provokes a response in these young Muslims, which ISIS uses in an attempt to encourage attacks abroad by Muslims residing in their own countries, as evidenced by a recent video published in mid-May by the Media Office of Wilayat Ninawa. In the video, American ISIS fighter Abu Hamza Al-Amriki [Abu Hamza the American] asked Muslims in the United States, “does it not pain you to see your brothers with their honor having been violated, and their bodies having been torn into pieces by the American airstrikes and their destructive weapons?”[6]

Final note: When I started this, I wanted to keep solely to a two-year time frame and not deviate from that goal. After the two-year time frame, however, there were two other attacks, one in London on June 3rd and one in Paris on June 6th. The attackers in these two attacks, Khuram Butt[7] (born in Pakistan), Rachid Redouane[8] (born in Morocco or Libya), Youssef Zaghba[9] (born in Morocco), and Farid Ikken[10] (born in Algeria, moved to Sweden), were all foreign nationals.


European Immigration

While much has been said about this problem, I thought I should offer my own two cents. Immigration from the Middle East and northern Africa, especially given what happened in Germany lately, is concerning, to put it lightly. It’s a hard fix, as I explained when I argued my own solution in a previous blog post, but it should be fixed nonetheless. Now, I do agree that we should help those in need, but with the risk that exists in Europe right now after a year of constant ISIS attacks, it’s not really worth as much as it once was. ISIS’s call for attacks in countries that are part of the global coalition against it is a focal point of its ideology and propaganda efforts.

The other part of the problem is the quick pace at which these refugees entered Europe. The vetting process from the United Nations High Commissioner for Refugees is stringent, true, but what we don’t know for sure is how many of the refugees mass-emigrating to Europe were vetted by the UN. Furthermore, the risk of infiltration of the refugee flow by ISIS members is statistically low, and even then, ISIS has strongly encouraged immigrating to ISIS-held territory, not going to Europe, which in their minds is “a dangerous major sin.”[1] Information from Frontex shows that between 2013 and 2015, there were 2,207,745 illegal border crossings across all main migration routes into the European Union (minus the Eastern Borders route between the EU’s eastern member countries and Russia, Belarus, Ukraine, and Moldova).[2] While I agree that we should help in some form, as I stated before, too many refugees in too short a time span without much proper vetting presents a problem. Even then, there is still the possibility that some could be radicalized after the vetting process and once they’re in the country, something that could be stopped, and that has produced warning signs within European intelligence organizations, though their infighting and other internal problems do not help the situation.

In short, further immigration efforts to Europe should be conducted legally, with strict vetting, and quota limits should apply.

The risks are just too high for the current policy to continue. Consider the following information about jihadi attacks [mostly carried out by ISIS] in Europe over the past two years, compiled by terrorism researcher Thomas Hegghammer.

Between 2014 and 2016, jihadi attacks killed 273 people, more than in all previous years combined (267). Between 2011 and 2015, almost 1,600 people were arrested in jihadism-related investigations in the EU (excluding the UK), an increase of 70% compared with the previous five-year period. In 2015 and 2016, there were 14 jihadi attacks, about 2.3 times the biannual average (6) for the preceding fifteen years, [there] were 29 well-documented attack plots, about 2.5 times the biannual average (12), [and about] half of the serious plots reached execution, compared with less than a third in the preceding fifteen years.[3]
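
For transparency, the multiples follow directly from the counts and biannual averages just given; a quick sketch:

```python
# 2015-16 counts vs. the biannual averages for the preceding fifteen years.
attacks, attack_avg = 14, 6
plots, plot_avg = 29, 12
print(f"attacks: {attacks / attack_avg:.1f}x the biannual average")  # 2.3x
print(f"plots:   {plots / plot_avg:.1f}x the biannual average")      # 2.4x
```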

[1]: Dabiq, Issue 11, Page 23

Denmark a Socialist Paradise?

I need to start posting more here.

So hopefully you all won’t mind some heavy reading to keep you wanting more after a two month absence.

In this installment, I am responding to an article posted by US Uncut, those people so full of economic ignorance that I didn’t even know how to respond… until I did. The original article is here if you want to look at it first.

Here are 9 reasons Denmark’s socialist economy leaves the US in the dust

You done reading? Alright, time to do my thing. The three main differences between the United States and Denmark (these can also apply to the other Nordic countries) are as follows.

1): Currency exchange rates between the two countries. As of Sept. 25th, 2016 at 3:47am UTC, one US dollar was worth 6.64 Danish kroner.[1]

2): Population is the starker contrast you see on the surface. The population of the United States was 317,780,510 as of 2014[2], while the population of Denmark was (also as of 2014) 5,627,235.[3] Thus, the United States’ population is 5,547.2% bigger than Denmark’s. I put this into perspective because, when trying to emulate programs that other countries have, the United States should at least be cautious, due to the number of people involved and the amount of money spent. Denmark’s population is less than that of New York City (8.5 million)[4] and smaller than (using 2015 numbers) that of 20 of our states here in the US.[5]
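
That percentage comes straight from the two population figures; a quick sketch:

```python
us_pop, dk_pop = 317_780_510, 5_627_235  # 2014 figures cited above
ratio = us_pop / dk_pop
print(f"The US has {ratio:.1f}x Denmark's population ({ratio - 1:.1%} bigger)")
# The US has 56.5x Denmark's population (5547.2% bigger)
```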

3): The cultural differences between the two countries, specifically business culture and work ethic. In 2007, six economists from Denmark, Finland, and Sweden noted that, although Nordic culture is highly secular, it “is strongly influenced by the Lutheran faith, which gives prominence to a strong work ethic and solidarity between members of society (and even conformist pressures). The long history of independent farmers and the tradition of local self-governance are other features worth noting.”[6] Robert Pateman writes that “this work ethic is established during childhood when Danish children are encouraged to find afterschool jobs, such as delivering newspapers or leaflets. At the same time, it is very much part of Danish culture that work should not be allowed to dominate life.”[7]

The United States started out the same way. As Steven Malanga writes, “sociologist Max Weber dubbed the qualities that Tocqueville observed the ‘Protestant ethic’ and considered them the cornerstone of successful capitalism. Like Tocqueville, Weber saw that ethic most fully realized in America, where it pervaded the society. Preached by luminaries like Benjamin Franklin, taught in public schools, embodied in popular novels, repeated in self-improvement books, and transmitted to immigrants, that ethic undergirded and promoted America’s economic success.”[8] Malanga continues, “After flourishing for three centuries in America, the Protestant ethic began to disintegrate, with key elements slowly disappearing from modern American society, vanishing from schools, from business, from popular culture, and leaving us with an economic system unmoored from the restraints of civic virtue.”[9]

In 2010, a Pew Research Center report showed that Millennials are the only generation that doesn’t cite “work ethic” as one of its principal claims to distinctiveness, whereas it showed up for Gen X (11%), the Boomer generation (17%), and the Silent generation (10%). The reason, as the Pew Research Center notes, is that “Millennials may be a self-confident generation, but they display little appetite for claims of moral superiority.”[10] Dan Schawbel of Forbes argues that “The pursuit of happiness and the American Dream drove progress and innovation, but they came with unintended side effects. In many cases, for instance, healthy ambition has morphed into avarice. Urbanization and an emphasis on large-scale businesses means fewer and fewer kids are learning about work in the natural course of family life.”[11]

One final thing to point out, which is the crux of the argument: Denmark is not a socialist country. For one thing, socialism is, as Business Dictionary notes, “a national financial system based on the public or cooperative ownership and administration of primary production capabilities […] economic systems typically employ central planning and use accounting systems based on the labor hours expended in production.”[12]

That same report I cited for the cultural differences also warned against “a straw man version of the Nordic model. This is the perception of the Nordic model as a socialist experiment with stifling taxes and heavy-handed regulation where paternalistic bureaucrats decide the fate of citizens from cradle to grave. Presumably such a model is neither efficient nor desirable on other grounds. […] Clearly the straw man version of the Nordic model needs to be amended.”[13] And to drive the point home, speaking last year at Harvard’s Kennedy School of Government, Danish Prime Minister Lars Løkke Rasmussen said the following: “I know that some people in the US associate the Nordic model with some sort of socialism. Therefore I would like to make one thing clear. Denmark is far from a socialist planned economy. Denmark is a market economy. The Nordic model is an expanded welfare state which provides a high level of security for its citizens, but it is also a successful market economy with much freedom to pursue your dreams and live your life as you wish.”[14]

With that out of the way, onto the main points!

1. Indeed, this is true, and can be thought of as a basic unemployment insurance plan. While the rates vary by state, “workers can collect the payments for as long as 99 weeks in states with the highest unemployment rates.”[15] That is 5 weeks less than the up-to-two-year period the Danes get. “States determine the amount of the benefits, but they average 36 percent of the average weekly wage, according to the National Employment Law Center”[16], so the Danes also have the edge on the “keep 90% of their salary” front. But has such a policy worked for the United States, given our limitations compared to our Nordic counterparts? I see the good intention of unemployment benefits: giving people something to live on while they search for another job. Using six months of data from before and after extended unemployment benefits were cut in early 2014, Jeffrey Dorfman writes that “In the six months before ending the extended unemployment benefits, total employment increased by 511,000. In the six months after the benefits stopped, employment rose by 1,635,000. That means employment gains were three times as fast after ending the extended benefits. It also translates into over one million more people working than if the trend from the previous six months had continued. […] Clearly, ending extended unemployment benefits did not cause a surge of people to give up and stay at home on the couch. Rather, we went from about 350,000 per month leaving the labor force to only 50,000 per month. At the same time, job gains went from 85,000 per month to over 270,000.”[17]

Even then, the benefits would have cost $26 billion per year according to the Congressional Budget Office[18], and BLS data shows that long-term unemployment is still high despite the extension of benefits.[19] This also contributes to systemic long-run unemployment: the longer workers remain out of the job market, the more their skills become obsolete and the greater the likelihood of remaining unemployed.[20] So how can this be fixed? A more effective approach for workers would be the establishment of Unemployment Insurance Savings Accounts, funded by a percentage of wages contributed by the employee and employer.[21] There can be a positive spin on this too, specifically with the introduction of personal savings. As Tim Harford argues, “those without their own cash reserves are using unemployment benefits to buy themselves time to find the right job.”[22]

The working-age population statistics are correct to a degree, but the story behind those numbers is a different one. As of last month, the U-3 unemployment rate (the “official” unemployment rate) was 5.0%. This sounds like a good thing, until you look at the U-6 unemployment rate, which includes the total unemployed, plus all persons marginally attached to the labor force, plus total employed part time for economic reasons, as a percent of the civilian labor force plus all persons marginally attached to the labor force; it sits at 9.7%, almost double the “official” rate.[23] That brings us to our other number, the Civilian Labor Force Participation Rate. Currently, it is 62.8%, somewhat close to the article’s claim that 67% of Americans have jobs.[24] But looking at things historically, this is close to the same rate we had in early 1979, and since 2009, an astonishing 13,128,000 people have left the work force.[25] As Kevin Ryan points out, “The bottom line is that fewer people are working today than at any point in at least a generation. This lower labor force participation rate presents a challenge for underfunded programs like social security, which rely on new workers to pay into it so that retiring baby boomers can receive their promised benefits, and hinders economic growth.”[26]

For young workers, this problem is compounded by minimum wage policy. I won’t go much into it, but a summary from the Employment Policies Institute is worth reading: “High minimum wage rates lead to unemployment for teens. One of the prime reasons for this drastic employment drought is the mandated wage hikes that policymakers have forced on small businesses. Economic research has shown time and again that increasing the minimum wage destroys jobs for low-skilled workers while doing little to address poverty.”[27]

2. The taxpayer-funded universal healthcare part I will not get into, as the topic is too long. But I will say that it might be (there is a bit of controversy on the topic) economically feasible for a small country like Denmark, but not so for a large country like the United States. Nonetheless, it is true that Denmark does on average pay around $3,000 less per capita on healthcare, but it is still above the OECD average (as a share of GDP)[28], and the numbers themselves are outdated. As of 2014 (the latest World Bank data available), Denmark’s per capita spending is $6,463[29] while the United States’ per capita spending is $9,402[30], still nearly $3,000 apart. The OECD also reports that in 2013, Denmark “saw increases in private spending, as user charges increased slightly for certain health care services and goods.”[31] It’s also worth noting that Denmark spends only 11.2% of its GDP on healthcare[32], while the United States spends 17.9%.[33] Even though Denmark has a public health system, it is run in such a way that there is less national involvement, and it is more efficient. It’s not necessarily a larger government causing the better outcome, but rather a more efficient system based on local and regional needs.

3. It’s ironic, seeing how the world’s happiest nation is Europe’s second-largest consumer of antidepressants, behind Iceland.[34] “Life satisfaction and income are highly correlated both across country averages and across individuals within a country, as pointed out in a comprehensive study of the literature by Betsey Stevenson and Justin Wolfers,” explains Otto Brøns-Petersen. “Furthermore, as pointed out by Christian Bjørnskov, a high level of trust also seems to increase life satisfaction, and, as Danes are quite trustful, that might play a role here too.”[35]

4. A good work-life balance is a good thing, but that mostly has to do with Danish culture. If by “work-life balance” they mean the number of hours worked, Denmark is in the #2 spot; according to the same CNN article, the Netherlands, also an OECD member, comes out on top with an average of 29 hours a week.[36][37] The claim that the average US worker puts in 47 hours a week is an incomplete number, as it only counts full-time jobs. The same Gallup source notes that “Part-time workers have averaged about 20 hours per week less than full-timers.”[38] Last year, according to BLS statistics, the United States had 148,833,000 people employed, of which 121,492,000 (81.63%) were full-time and 27,341,000 (18.37%) were part-time.[39] In other words, the “47 hours a week” does not count nearly 20% of American workers (a quick sketch of this weighting appears at the end of this point). The average hours for American workers as a whole is 38.6 hours[40], close to the 37 hours a week the Danes have, not to mention that there is no government-mandated work week in Denmark.

And think of the bigger picture here. The average annual hours worked by Americans dropped from 1,983.7 in 1950 to 1,764.5 in 2014.[41] Furthermore, productivity has gotten far better over the past 60 years; according to data gathered by Erik Rauch, “An average worker needs to work a mere 11 hours per week to produce as much as one working 40 hours per week in 1950.”[42]

The stats on paid vacation are true for Denmark, but a little mixed for the United States. Going off 2012 numbers, 77% of American workers get paid vacation, the amount of which depends on how long you’ve worked for the company: 1 year (10 days), 5 years (14 days), 10 years (17 days), and 20 years (20 days).[43] Costs are indeed rising due to inflation; you can blame the Federal Reserve for that one. And one could say wages are stagnating, but wages aren’t the only thing to look at, because wage statistics leave out benefits. Since 2000, wages and benefits have gone up 66% while inflation went up 39%. Even if we use the median rather than the mean, compensation growth (49%) still exceeds inflation (39%).[44]
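To illustrate how leaving out part-timers skews the headline figure, here is a minimal sketch blending the Gallup and BLS numbers above. The 27-hour part-time figure is my own assumption (Gallup’s 47 hours minus the roughly 20 fewer hours it reports for part-timers), and the blended result won’t match the BLS’s 38.6 exactly, since Gallup’s figures are self-reported:

```python
# Rough sketch: blending full-time and part-time hours into one average.
# Employment counts are the BLS figures cited above; the 27-hour
# part-time figure is an assumption (Gallup's 47 hours minus the
# "about 20 hours per week less" it reports for part-timers).
full_time = 121_492_000
part_time = 27_341_000
total = full_time + part_time        # 148,833,000 employed

ft_hours = 47                        # Gallup, full-time average
pt_hours = 27                        # assumed: 47 - ~20

weighted = (full_time * ft_hours + part_time * pt_hours) / total
print(f"Part-time share: {part_time / total:.2%}")    # -> 18.37%
print(f"Blended average: {weighted:.1f} hours/week")  # -> ~43.3
```

Even on Gallup’s own numbers, counting everyone pulls the average several hours below the headline 47.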

5. Indeed, the cost of college has become more expensive in the United States within the past 30 years, mostly due to inflation and government loans. Using the example of the University of Pennsylvania, tuition in 1950 was $600 a year[45], while last year it was $43,838 a year[46], a percentage increase of 7,206.33%! That $600-a-year education in 1950 would cost $5,996.34 a year when adjusted for inflation[47], which is not that bad, all things considered. (A quick sketch of this arithmetic follows at the end of this point.)

The other thing is government aid. If you artificially inflate demand for something and don’t let supply adjust, prices will go up, and this holds true for college education. Last year, the Federal Reserve Bank of New York reported that “institutions more exposed to changes in the subsidized federal loan program increased their tuition disproportionately around these policy changes. […] The point estimates indicate that increases in institution-specific subsidized loan maximums lead to a sticker-price increase of about 60 cents on the dollar, and that increases in the unsubsidized loan and Pell Grant per-student maximums are associated with sticker-price increases of 15 cents on the dollar and 40 cents on the dollar, respectively.”[48] Richard Vedder writes that “From 1910 to about 1978, tuition fees rose about 1% a year adjusting for inflation–explainable by the Baumol Effect (colleges are a service industry with no opportunity for productivity advance). Since 1978, fee increases have over doubled, to closer to 3% a year, reflecting the enormous growth in student loan and grant programs.”[49] Pascal-Emmanuel Gobry also informs us that “US colleges that don’t accept Federal loans have tuition roughly half of their similarly-ranked peers.”[50]

And what about other factors? Washington Monthly reports that “over the same period [1975-2005], the faculty-to-student ratio has remained fairly constant. […] In 1975, colleges employed one administrator for every eighty-four students and one professional staffer, admissions officers, information technology specialists, and the like, for every fifty students. By 2005, the administrator-to-student ratio had dropped to one administrator for every sixty-eight students while the ratio of professional staffers had dropped to one for every twenty-one students.”[51] The Wall Street Journal reports that within American higher education, “the number of employees hired by colleges and universities to manage or administer people, programs and regulations increased 50% faster than the number of instructors between 2001 and 2011, the U.S. Department of Education says. It’s part of the reason that tuition, according to the Bureau of Labor Statistics, has risen even faster than health-care costs.”[52] The article also notes that colleges compete by offering “fancier dorms, dining halls, gyms and other amenities, to raise their rankings and attract students.”[53]

For Denmark, school can cost some people money, specifically those outside the European Union who 1) do not have a permanent or temporary residence permit, or 2) do not have a residence permit as the accompanying child of a non-EU/EEA parent holding a residence permit based on employment. If that is the case, actual tuition can range between $8,000 and $21,000 USD.[54] Moreover, independent education has a long tradition in Denmark, where “the free choice of school and education is of central importance to a well-functioning education system. Apart from the fact that it is a goal in itself to give the students a free choice, a free choice of school and education will also further the schools’ initiative and industry.”[55]
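To make the Penn numbers concrete, here is a minimal sketch of the two calculations at the start of this point; the inflation multiplier is simply the one implied by the cited $5,996.34 figure:

```python
# Sketch of the University of Pennsylvania tuition arithmetic above.
tuition_1950 = 600.00      # annual tuition, 1950
tuition_2015 = 43_838.00   # annual tuition, last year

# Nominal percentage increase
pct_increase = (tuition_2015 - tuition_1950) / tuition_1950 * 100
print(f"Nominal increase: {pct_increase:,.2f}%")   # -> 7,206.33%

# Inflation-adjusted 1950 tuition, using the multiplier implied by the
# cited figure ($5,996.34 / $600, roughly 10x over 1950-2015)
cpi_multiplier = 5_996.34 / 600.00
adjusted = tuition_1950 * cpi_multiplier
print(f"1950 tuition in today's dollars: ${adjusted:,.2f}")  # -> $5,996.34
```

In other words, inflation alone accounts for roughly a tenfold increase; the rest of the gap is what the loan-subsidy and administrative-bloat explanations above try to account for.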

6. It was also mentioned in a previous point that Danes earn an average of $46,000 USD annually. This is true, to an extent. But what about adjusting it for GDP per capita PPP (Purchasing Power Parity), that is, “the total adjustment that must be made on the currency exchange rate between countries that allows the exchange to be equal to the purchasing power of each country’s currency”?[56] When looking at GDP per capita without adjusting for PPP, Denmark is in the lead, with a GDP per capita of $58,207.90 US dollars in 2015[57], while the United States had one of $51,486 that same year.[58] Adjusting for PPP, however, shows the United States has a GDP (PPP) per capita of $52,549.01 in 2015[59], while Denmark’s GDP (PPP) per capita is $43,415.23.[60] Furthermore, using the Geary–Khamis dollar (the 2000 international dollar, as it were)[61], data from the World Bank[62], International Monetary Fund[63], and CIA[64] show that in all cases, the GDP (PPP) per capita is anywhere between 9,202 and 10,500 international dollars higher in the United States than in Denmark.

The taxation system should also be taken into account, although to keep it simple, I’m going to only use federal income taxes (as if someone were single, not married) at the average income for each country in their respective currency (not adjusting for PPP). Average Danish wages are 38,957.98 krone a month[65], which is $5,873.85 USD[66], or 467,495.76 krone annually, which is $70,408.08 USD.[67] Again, not adjusting for PPP. In the United States, the average annual wage is $44,510.[68] Now for the taxes! For Denmark, there is a gross tax of 8% that comes out before the income tax is applied.[69] Then there is the income tax, at 55.8%.[70] Applying the 8% leaves 430,096.10 krone, and taking 55.8% of that yields 239,993.62 krone in income tax, for a take-home pay of 190,102.48 krone, or about $28,630 USD. The tax rate for the average annual wage for Americans is 25%[71], so the American take-home pay would be $33,382.50. The American comes out ahead by this rough measure, though keep in mind that the 55.8% is applied flatly here for simplicity, and once again, I’m not adding PPP. Just simple currency conversions.

And then there’s the standard of living, where Daniel Mitchell notes that data from the OECD and the Danish Finance Ministry “suggest that Americans enjoy higher living standards than their Danish counterparts.”[72] Even Danish-Americans have a standard of living about 55% higher here in the United States than the Danes in Denmark.[73] Cost of living is a give and take: consumer prices, restaurant prices, and local purchasing power are higher in Denmark by 15.72%, 47.11%, and 1.49% respectively, while rent prices and grocery prices are lower in Denmark by 11.21% and 10.88% respectively.[74]
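Here is a minimal sketch of that back-of-the-envelope comparison, applying the cited rates flatly (as the text does, for simplicity) and deriving the krone-to-dollar rate from the cited conversion:

```python
# Back-of-the-envelope take-home comparison from the text. The 55.8%
# Danish rate is applied flatly for simplicity, which overstates the
# tax under Denmark's progressive schedule, so the real Danish
# take-home would be higher than this.
dk_gross_kr = 467_495.76               # average annual Danish wage, kr
kr_per_usd = dk_gross_kr / 70_408.08   # implied by the cited conversion

after_gross_tax = dk_gross_kr * (1 - 0.08)    # 8% comes off the top
dk_income_tax = after_gross_tax * 0.558       # -> 239,993.62 kr
dk_take_home_kr = after_gross_tax - dk_income_tax
print(f"Danish take-home: {dk_take_home_kr:,.2f} kr "
      f"(~${dk_take_home_kr / kr_per_usd:,.2f})")   # -> ~$28,630

us_gross = 44_510.00                          # average US annual wage
us_take_home = us_gross * (1 - 0.25)          # 25% bracket, flat
print(f"American take-home: ${us_take_home:,.2f}")  # -> $33,382.50
```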

7. By whose definition are we measuring poverty? Each country has its own poverty threshold, but we can go to some international figures to show a different side of the story. The Human Poverty Index shows that Denmark’s poverty rate is 8.2% while the United States’ is 15.4%.[75] Moreover, the OECD reports that the United States has a poverty rate of 17.3%[76], while Denmark has theirs at 6.1%.[77] Also, one look at some of the possessions Americans living in poverty own confirms the idea that they are indeed wealthy compared to people in other countries.[78]

8. And why is the US ranked #18 for best country for business? Looking at the updated 2015 version, we find that Denmark is still #1, but the United States has slipped to #21, worse than the year before. The United States “scores poorly on monetary freedom and bureaucracy/red tape [among other factors]. More than 150 new major regulations have been added since 2009 at a cost of $70 billion, according to the Heritage Foundation. […]” while Denmark scores “particularly well for freedom (personal and monetary) and low corruption. The regulatory climate is one of the world’s ‘most transparent and efficient,’ according to the Heritage Foundation.”[79] Furthermore, the corporate tax rate is more business friendly in Denmark than in the United States, with Denmark’s at 23.5%[80] and the United States’ at 38.9%.[81] But how can Denmark tax so much and get away with such prosperity? Henrik Kleven explains that “far-reaching information trails that facilitate tax compliance, broad tax bases that limit the scope of legal tax avoidance, and large public spending focused on complements to work” are the key to success.[82] However, Kleven also tells us that “social and cultural factors may make it easier to enact these kinds of policies, and in turn the social and cultural norms may themselves be driven by the design of policies and institutions,” and warns that “replicating the Scandinavian policies and institutions in societies that are fundamentally different is unlikely to be achievable or perhaps even desirable. The point is instead for countries everywhere to think carefully about how to collect taxes and redistribute income with less distortion from tax evasion, tax avoidance, and reduced labor supply, and the Scandinavian experience may provide ideas on how to expand the conversation about these important questions.”[83]

9. Wrong, new American parents do get something. Only when it’s not mandated by the government is it “nothing.” Ernst & Young leads the nation with 39 weeks of maternity leave[84], while Reddit leads with 17 weeks for paternity leave.[85] 12% of American workers get paid family leave, something that has improved over time from 1% in 1992.[86]

The Cato Institute wrote a paper back in 1988 highlighting some concerns with then-proposed family-leave legislation. They concluded that “the presence of more women in the labor market could bring about more employer discrimination against women. Women with education, experience, and training would fare better than those without. Women beyond childbearing age would fare better than younger women. And single women would benefit at the expense of married women.”[87] Claire Cain Miller of the New York Times analyzed 22 countries that implemented a maternity leave policy and found that such policies hurt the women they were trying to help; even when those women had jobs, the positions tended to be “dead-end” and less likely to be managerial posts. Moreover, because of the American Family and Medical Leave Act (and that’s just unpaid leave!), women are slightly more likely to stay employed but receive fewer promotions.[88]

Even then, studies have shown that leave can help, but it can also harm. Expanding the length of a woman’s maternity leave [from 18 to 35 weeks] adds no additional benefit to the child’s welfare, nor does it create benefits for the couple’s marriage.[89] On a somewhat positive note, maternity leave does give the added benefit of improved child development; however, “the estimates suggest a weak impact of the increase in maternal care on indicators of child development.”[90] And although maternity leave does not appear to interrupt a woman’s career progression at first glance, extended maternity leave does cause human capital depreciation, which creates issues for a woman’s career prospects in the long term. The OECD reports that “women who make full use of their maternity or parental leave entitlements receive, on average, lower wages in the years following their resumption of work than those who return before leave expires [and] can permanently damage [mothers’] ability to achieve their labor market potential.”[91]

“It’s all about demand, supply, and prices,” notes Nita Ghei, senior policy research editor for the Mercatus Center at George Mason University. “When the after-tax wage (the ‘price’) increases, more women will be willing to work (that is, supply increases). A tax cut has the advantage of not increasing costs to employers, so there is no decrease in demand, as there would be with a mandated paid leave provision.”[92] “To acquire and retain quality employees,” says Laurence Vance, “most employers offer employees a variety of fringe benefits, vacation pay, sick leave, paid time off, holiday pay, jury-duty pay, child care, and discounts on food, merchandise, or services, none of which is mandated by the government. It should be no different with family leave. Whether an employer offers it, whether it is paid or unpaid, and what the length of it is, is a matter to be settled by agreement between the employer and employee.”[93]

The last part of the article ends by asking the reader whether this “socialist” dream is “an ideal vision of what Americans could have if we came together and demanded it from our government?” Here’s the thing: it could very well be what Americans could have; this is indeed possible. However, there is a catch, two, in fact. The first is that to achieve these lofty goals, an amendment to the Constitution must be put forth. One might argue (wrongly) that this is constitutional because Article I, Section 8, Clause 1 of the Constitution says Congress “shall have Power To lay and collect Taxes, Duties, Imposts and Excises, to pay the Debts and provide for the common Defence and general Welfare of the United States”. That last portion of the clause, “provide for the common Defence and general Welfare”, also appears in the Preamble, but the Preamble merely states the intention of the overall document and carries no legal force. As Justice Harlan noted in Jacobson v. Massachusetts, “Although that Preamble indicates the general purposes for which the people ordained and established the Constitution, it has never been regarded as the source of any substantive power conferred on the Government of the United States, or on any of its Departments. Such powers embrace only those expressly granted in the body of the Constitution.”[94] Furthermore, constitutional scholar Robert Natelson notes that the Preamble “is merely declaratory of a limitation the Founders believed inherent in free government and does not have force beyond that.”[95]

But even in the body of the Constitution where the “general welfare” is mentioned, it still is not a blanket government-can-do-whatever-it-wants power, for if it were, the rest of Article I, Section 8 would not be needed. Early Constitution commentator Joseph Story noted the following in his commentary on the Constitution: “A power to lay taxes for any purposes whatsoever is a general power; a power to lay taxes for certain specified purposes is a limited power. A power to lay taxes for the common defense and general welfare of the United States is not in common sense a general power. It is limited to those objects. It cannot constitutionally transcend them.”[96] Natelson further explains that “the General Welfare Clause was an unqualified denial of spending authority. It did not add to federal powers; it subtracted from them. The General Welfare Clause was designed as a trust-style rule denying Congress authority to levy taxes for any but general, national purposes.”[97]

And what better way to clear up this ambiguity than with the words of James Madison, the father of the Constitution? In Federalist 45, written in 1788, Madison explained that the delegated powers in the Constitution are “few and defined. Those which are to remain in the State governments are numerous and indefinite. The former will be exercised principally on external objects, as war, peace, negotiation, and foreign commerce; with which last the power of taxation will, for the most part, be connected. The powers reserved to the several States will extend to all the objects which, in the ordinary course of affairs, concern the lives, liberties, and properties of the people, and the internal order, improvement, and prosperity of the State.”[98] In 1792, Madison wrote to Henry Lee in reaction to Hamilton’s Report on Manufactures, summing up his views on the General Welfare Clause: “The federal Govt. has been hitherto limited to the Specified powers, by the greatest Champions for Latitude in expounding those powers. If not only the means, but the objects are unlimited, the parchment had better be thrown into the fire at once.”[99] Madison’s last veto message, an 1817 rejection of the Internal Improvements Bill, demonstrated why he turned it down on constitutional grounds: “To refer the power in question to the clause ‘to provide for the common defense and general welfare’ would be contrary to the established and consistent rules of interpretation, as rendering the special and careful enumeration of powers which follow the clause nugatory and improper. Such a view of the Constitution would have the effect of giving to Congress a general power of legislation instead of the defined and limited one hitherto understood to belong to them, the terms ‘common defense and general welfare’ embracing every object and act within the purview of a legislative trust.”[100] Even in retirement, he wrote the following in 1831: “With respect to the words ‘General welfare’ I have always regarded them as qualified by the detail of powers connected with them. To take them in a literal and unlimited sense, would be a metamorphosis of the Constitution into a character, which there is a host of proofs was not contemplated by its Creators.”[101]

To be sure, this is only one Founding Father, but as Natelson has mentioned, this view was shared by the majority of the Founders. Of those who dissented, Hamilton is the only one I know of who rejected this idea. And even if an amendment were proposed, I wish you luck. In 225 years, 11,623 constitutional amendments have been proposed, and only 27 were ratified, a success rate of 0.23%.[102] Any amendment to instill the Nordic welfare model in the US would have to go through either two-thirds of the House and Senate approving the proposal or a convention called by two-thirds of the legislatures of the states. Either way, it still has to get the approval of three-fourths of the states before it becomes part of the Constitution. With that being said, the best way these programs could work, given population sizes, is at the state level. But even then, state constitutions would need to be amended if this were to be done.

In conclusion, although it is true that Denmark does provide some services from the government that the United States does not, it does so mostly at the municipal and regional (equivalent to our local and state) governments, rather than at the national (our federal) level. Denmark recognizes the benefit of decentralization and, as a result, manages to beat the United States in economic freedom. Denmark’s government is much more efficient, and it utilizes the equivalents of our county and state governments, whereas this article seems to want all of these things under federal control, a pipe dream that simply doesn’t stand up to the facts.

[9]: Ibid.
[16]: Ibid.
[21]: Ibid.
[26]: Ibid.
[53]: Ibid.
[83]: Ibid.

The 400 Million Dollar Question

Did the Obama Administration give $400 million to Iran in exchange for hostages? In a word, no. Let me explain. The $400 million that the United States sent to Iran is part of a larger installment of $1.7 billion[1], of which $1.3 billion is interest.[2]

The money was not for the four American hostages freed the day after; that was a separate deal (and negotiating team)[3] that lasted about a year, one that involved five Americans, including Pastor Saeed Abedini and Washington Post journalist Jason Rezaian, in exchange for seven Iranians, six of whom are dual American-Iranian citizens, none of whom have yet returned to Iran.[4]

The $400 million sent to Iran was their money to begin with. Before the Iranian revolution in 1979, the government under Shah Mohammad Reza Pahlavi requested and paid for some U.S. fighter jets.[5] The jets were never delivered, and all Iranian assets were frozen during the hostage crisis. The Iranians didn’t get what they paid for and weren’t refunded. The Algiers Accords in 1981 sought to settle some financial disputes between the two countries, but did not resolve everything. The Iran–United States Claims Tribunal was then set up to resolve the remaining issues, where Iran wanted $10 billion for the original $400 million dispute.[6]

“U.S. officials had expected a ruling on the Iranian claim from the tribunal any time,” wrote Associated Press reporter Matt Lee, “and feared a ruling that would have made the interest payments much higher.”[7] This deal wasn’t secret either. Obama discussed it back in January, saying that “for the United States, this settlement could save us billions of dollars that could have been pursued by Iran. So there was no benefit to the United States in dragging this out. With the nuclear deal done, prisoners released, the time was right to resolve this dispute as well.”[8]

And the United States has also benefited from the Claims Tribunal; during its first 20 years, it awarded $2.5 billion to U.S. nationals and companies.[9] Both the 35-year-old settlement and the hostage situation were resolved just a day apart, killing two birds with one stone. Whether or not it was a ransom payment, the Iranians would have gotten the money no matter what.

“Iranian press reports have quoted senior Iranian defense officials describing the cash as a ransom payment,” noted the Wall Street Journal in the article that started the entire controversy, albeit seven months late.[10] Reportedly, senior Justice Department officials objected to paying the $400 million over concerns that the Iranians would consider it a ransom payment, sending the wrong signals.[11] But that’s just Iran being Iran, trying to look strong to its domestic audience.

Nonetheless, I do believe that the actual deal could have gone through more legal channels. As Andrew C. McCarthy puts it, “The law on which the anti-terrorism sanctions are based gives the president broad waiver discretion. [Obama] could have issued a waiver in order to enable our government to pay Iran.”[12] Not to mention that I am also concerned about where the money will actually go, seeing that Iran is a state sponsor of terrorism. Where it will end up, though, I have no idea.

But in the end, in the words of Barbara Slavin, acting director of the Future of Iran Initiative at the Atlantic Council: “It was an opportunity for countries with no diplomatic relations to clear away a number of diplomatic disputes. For the U.S., it was important to get back the detained Americans, and the Iranians wanted their seven citizens out of jail.”[13]

It’s Time to Audit the Pentagon

A Department of Defense internal review by its Inspector General shows that “the Office of the Assistant Secretary of the Army and the Defense Finance and Accounting Service Indianapolis did not adequately support $2.8 trillion in third quarter journal voucher [a written authorization prepared for every financial transaction, or for every transaction that meets defined requirements] adjustments and $6.5 trillion in yearend JV adjustments made to AGF [Army General Fund] data during FY 2015 financial statement compilation. […] In addition, DFAS Indianapolis did not document or support why the Defense Departmental Reporting System‑Budgetary, a budgetary reporting system, removed at least 16,513 of 1.3 million records during third quarter FY 2015.”[1]

In other, non-technical words, the Pentagon has not adequately accounted for $6.5 trillion in adjustments for FY 2015 (and perhaps even further back) and is missing at least 16,513 records from the third quarter of FY 2015 alone.

In 2013, Reuters found that “the Pentagon is largely incapable of keeping track of its vast stores of weapons, ammunition and other supplies; thus it continues to spend money on new supplies it doesn’t need and on storing others long out of date. […] A review of multiple reports from oversight agencies in recent years shows that the Pentagon also has systematically ignored warnings about its accounting practices.”[2]

Back in 1996, Congress passed a law requiring every federal agency to be audited[3], and in 2009, Congress mandated that “the financial statements of the Department of Defense are validated as ready for audit by not later than September 30, 2017”.[4] To date, the Pentagon/Department of Defense is the only agency that has failed to be audited. That means the $8.5 trillion in taxpayer money given to the Pentagon since 1996 has never been fully accounted for.

There have been some smaller incidents over the years, some of which the Fiscal Times compiled into a list last year: spending $1 billion to destroy $16 billion worth of ammo it didn’t actually need; not keeping tabs on $300 million meant to help fund the payroll of the Afghan National Police; failing to track $500 million worth of military equipment given to Yemen since 2007; being overcharged $1 billion by federal contractors (who dodged work hours and neglected safety requirements) for loose bolts and damaged aircraft; and spending $900 million more than estimated on naval ships.[5]

I could go on about the wasteful spending, but in the end, I want to offer a solution. The Pentagon needs to be audit-ready by September 2017, as I previously mentioned. If it isn’t, then I suggest that Congress not accept any new funding requests from the Pentagon until an audit is at least started. The Pentagon can get by with the $585 billion it requested for FY 2016. A small price to pay for 20 years and $8.5 trillion of unaccounted-for funds.